
Internet of Things


The following very interesting paper from DEFCON shows how vulnerable MQTT brokers can be if designers do not carefully consider the various attack vectors that can be exploited in an IoT solution powered by MQTT.


The paper shows how one can find and access MQTT brokers on the Internet and perform actions such as open prison doors, change radiation levels, and so on. Your personal information may already be available via a public MQTT broker.


Since MQTT brokers listen on a port number, a simple port scanner can find the broker. The device search engine Shodan now includes searches for MQTT brokers. The paper goes into how to use Shodan to find public brokers and then uses commands that reveal every device connected to the broker.
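To make the scanning step concrete, here is a minimal sketch of the kind of TCP probe a scanner uses to flag hosts running a broker on MQTT's default ports. The `is_port_open` helper is invented here for illustration; Shodan simply automates this probe at Internet scale.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A scanner that finds port 1883 (or 8883 for MQTT over TLS) open
    has found a candidate MQTT broker.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Typical probe: is_port_open("broker.example.com", 1883)
```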




Unfortunately, MQTT has many attack vectors that IoT system designers must consider.


In addition, many MQTT brokers include special debug commands that make it possible to find all connected devices, thus greatly extending the number of possible attack vectors.

In contrast, the SMQ IoT protocol (a pub/sub protocol similar to MQTT) has a very limited set of attack vectors compared to MQTT. You can completely hide the SMQ broker from automated tools such as Shodan and other port scanners. This is possible since an SMQ connection initially starts as HTTP(S) and a port scanner cannot see the difference between an SMQ broker and a standard web server. In addition, the URL to the broker can be private.


Since SMQ runs within an application server, no public debug API is necessary. SMQ provides an extended privilege API, but this API is only accessible to application code that runs on the application server and interacts directly with the SMQ broker.


SMQ provides hash-based authentication, a feature required when not communicating over SSL; MQTT, by contrast, sends credentials in clear text. However, both protocols are more secure when communication is protected by TLS. Note that TLS alone will not protect against many of the vulnerabilities mentioned in the paper. For this reason, system designers must have a good understanding of IoT security, or they must seek help from experienced IoT security specialists, for example by using the support line provided with commercial security products.
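The hash-based idea can be sketched as a simple challenge-response exchange. This is an illustrative toy, not SMQ's actual wire protocol, and the function names are invented here; the point is that the shared secret never crosses the network, unlike the clear-text credentials in an MQTT CONNECT packet.

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Broker side: a fresh random nonce per connection attempt."""
    return os.urandom(16)

def client_response(secret: bytes, nonce: bytes) -> bytes:
    """Client side: prove knowledge of the secret without sending it."""
    return hmac.new(secret, nonce, hashlib.sha256).digest()

def broker_verify(secret: bytes, nonce: bytes, response: bytes) -> bool:
    """Broker side: recompute and compare in constant time."""
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A passive eavesdropper sees only the nonce and the HMAC digest; replaying the digest fails because the broker issues a new nonce every time.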


Read the DEFCON MQTT paper

ARM mbed is now hosting a new event specifically for developers, ARM mbed Connect - USA. This event will give attendees a leading edge in building innovative and scalable Internet of Things (IoT) solutions!


Be sure to register to secure your spot, as spaces are limited. ARM mbed Connect – USA will be held at the Santa Clara Convention Center, just ahead of ARM TechCon on Monday, 24 October 2016. 


Join us for a day full of demonstrations, hands-on workshops and the chance to hear the latest news and developments from the mbed team. By bringing together developers from our growing 170,000-strong developer community, along with the experts from our 60-partner ecosystem at mbed Connect, we’re offering a unique opportunity to work together and shape the future of IoT.

Not local?  Don’t worry, mbed Connect is also coming to China! Look out for registrations opening soon for mbed Connect – China. The event will be on Monday, 5 December at the Grand Hyatt in Shenzhen.


Stay up to date with the latest on mbed by signing up for an mbed developer account.


MQTT Library Demo

Posted by dmitryslepov Aug 1, 2016

This is the demo project for Tibbo's MQTT library. The project demonstrates how easy it is to create sophisticated network-enabled applications in Tibbo BASIC and Tibbo C. The code is extremely simple and easy to understand. Take this app and modify it for your MQTT needs.


About The Application

TPS3-based MQTT publisher and TPS3-based MQTT subscriber

To illustrate the use of the MQTT library, we have created two simple Tibbo BASIC applications called "mqtt_publisher" and "mqtt_subscriber".

In our MQTT demo, the publisher device is monitoring three buttons (Tibbits #38). This is done through the keypad (kp.) object.

The three buttons on the publisher device correspond to the red, yellow, and green LEDs (Tibbits #39) on the subscriber device.

As buttons are pushed and released, the publisher device calls mqtt_publish() with topics "LED/Red", "LED/Yellow", and "LED/Green". Each topic's data is either 0 for "button released" or 1 for "button pressed". The related code is in the on_kp() event handler.

The subscriber device subscribes to all three topics with a single call to mqtt_sub() and the line "LED/#". This is done once, inside callback_mqtt_connect_ok().
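The single "LED/#" line works because MQTT topic filters treat "+" as a single-level wildcard and a trailing "#" as a multi-level wildcard. A minimal Python matcher (an illustration of the rule, not part of the Tibbo library) looks like this:

```python
def topic_matches(filt: str, topic: str) -> bool:
    """Minimal MQTT topic-filter matcher: '+' matches exactly one
    level, '#' (only valid as the last level) matches all remaining
    levels. So 'LED/#' covers LED/Red, LED/Yellow, and LED/Green."""
    flevels = filt.split("/")
    tlevels = topic.split("/")
    for i, f in enumerate(flevels):
        if f == "#":
            return i == len(flevels) - 1  # '#' must be the last level
        if i >= len(tlevels):
            return False                  # topic ran out of levels
        if f != "+" and f != tlevels[i]:
            return False                  # literal level mismatch
    return len(flevels) == len(tlevels)
```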

With every notification message received from the server, the subscriber device gets callback_mqtt_notif() invoked. The LEDs are turned on and off inside this function's body.

Testing the MQTT demo

The demo was designed to run on our TPS3 boards, but you can easily modify it for other devices.

The easiest way to get the test hardware is to order "MQTTPublisher" and "MQTTSubscriber" TPS configurations.

You can also order all the parts separately:

  • On the publisher side:
    • TPP3 board in the TPB3 enclosure.
    • You will need Tibbits #00-3 in sockets S1, S3, S5; and
    • Tibbits #38 in sockets S2, S4, S6.
    • You will also need some form of power, e.g. Tibbits #10 and #18, plus a suitable 12V power adaptor.
  • On the subscriber side:
    • TPP3 board in the TPB3 enclosure.
    • You will need Tibbits #00-3 in sockets S1, S3, S5;
    • Tibbit #39-2 (red) in S2;
    • Tibbit #39-3 (yellow) in S4;
    • Tibbit #39-1 (green) in S6.
    • You will also need some form of power, e.g. Tibbits #10 and #18, plus a suitable 12V power adaptor.

Test steps

  • Install a suitable MQTT server. We suggest HiveMQ:
    • Download the software from the HiveMQ website (you will be asked to register).
    • Unzip the downloaded file.
    • Go to the "windows-service" folder and execute "installService.bat".
    • Go to the "bin" folder and launch "run.bat".
    • You do not need to configure any user names or passwords.
  • Open mqtt_publisher and mqtt_subscriber projects in two separate instances of TIDE, then correct the following in the projects' global.tbh files:
    • OWN_IP - assign a suitable unoccupied IP to the publisher and to the subscriber (you know that they will use two different IPs, right?);
    • MQTT_SERVER_HOST - set this to the address of the PC on which you run HiveMQ.
  • Select your subscriber and publisher devices as debug targets, and run corresponding demo apps on them.
  • Press buttons on the publisher to see the LEDs light up on the subscriber.
  • If you are running in debug mode you will see a lot of useful debug info printed in the output panes of both TIDE instances.
  • You can switch into the release mode to see how fast this works without the debug printing.
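If you want to exercise the subscriber from a PC instead of a second TPS3 board, a publisher following the same topic scheme can be sketched with the third-party paho-mqtt package. The package, the `publish_buttons` helper, and the host address are all assumptions here (not part of the Tibbo demo), and the calls follow the paho-mqtt 1.x API:

```python
def button_messages(states: dict) -> list:
    """Map button states to the demo's topics: payload b"1" for
    pressed, b"0" for released, matching the Tibbo publisher."""
    return [("LED/" + color, b"1" if pressed else b"0")
            for color, pressed in states.items()]

def publish_buttons(host: str = "192.168.1.10") -> None:
    # Requires the third-party paho-mqtt package (1.x API); the host
    # address is a placeholder for your MQTT_SERVER_HOST value.
    import paho.mqtt.client as mqtt
    client = mqtt.Client()
    client.connect(host, 1883)
    for topic, payload in button_messages(
            {"Red": True, "Yellow": False, "Green": False}):
        client.publish(topic, payload)
    client.disconnect()
```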



Securing Medical & Wellness Data

Your health data is among the most personal and confidential data you own. Through the advent of sensor innovations, many more devices are gathering this data: fitness bands, smartwatches, even phones counting your steps automatically without you having to do anything. This is only the beginning; we are starting to see innovations in medical and wellness monitoring from all sorts of devices, from toothbrushes which can detect cancer to patches you wear that monitor UV exposure or hydration. Innovations in microfluidic technologies are enabling analysis of your blood, sweat, and urine at price points where it can reach consumers' hands in both developed and developing countries.


This data, if used correctly, will keep us more informed of what’s happening inside and outside our bodies and alert us with the right information at the right time to make informed decisions. Taking it one step further, mobile and cloud platforms can enable a holistic system of health that informs a close, trusted circle of family and friends about changes in health, helping individuals make the right lifestyle choices. It will also help caregivers know the right time to intervene, potentially staving off a more severe condition.


Unfortunately, as with any technological innovation, it also has potential malicious uses with substantial financial and social consequences:

  • Insurance providers could use the data to increase premiums or cancel policies
  • Informed employers may choose healthier candidates (to keep costs down)
  • Dating applications could add medical filters


But how is the data being handled from when it gets created at the source? Is it being guarded all the way from the sensor to the phone, to the cloud? What happens to your data in the cloud? Is it shared with 3rd parties? Have you read the Terms and Conditions for each of your digital devices to understand the answers to these questions? In this blog, we will aim to address some of the basic vulnerabilities of data as it travels from sensor->phone->cloud and explore a method to safeguard it, as well as talk about some of the initiatives taking place to help safeguard our health data.


Threats and Hacks

There are two threat vectors that we will address in this video:

  1. Screen Scrape Attacks
  2. BLE attacks

Screen scrape attacks leverage the ability to “record” the frame buffer of a device's screen, stealing data as an app renders it. This technique has been used to steal everything from passwords to high-value video content. The video below demonstrates this threat:




Today the majority of medical and wellness devices utilize Bluetooth LE to communicate between the sensor and the phone, using the phone as the “gateway” to the cloud. A large number of these devices tend to rely solely on Bluetooth link layer encryption, which presents a vulnerability: data can be stolen at the “application layer” while it’s in motion on the phone or gateway itself. The video below demonstrates this threat.


Protecting medical and wellness data using ARM TrustZone™ based TEE

Trusted execution environments (TEEs), for example from Trustonic or Sequitur Labs, provide a secure environment alongside a rich OS like Android in which to run trusted code. TEEs can be found in hundreds of thousands of mobile phones already in the market today, with that number increasing as services like payment, premium content, and enterprise BYOD grow. The idea here is simple: we encrypt data from the sensor at the application layer, so even after BLE link layer encryption has been stripped from the payload, it’s still encrypted and stays encrypted until it lands in the TEE, where it is decrypted, rendered, validated, then sent onwards to the healthcare provider cloud, keeping the data secure even when it’s in motion on the phone.
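The encrypt-at-the-sensor idea can be sketched as encrypt-then-MAC over each reading. This is a standard-library-only toy for illustration (the keystream construction here is not a vetted cipher); a real design would use a proven AEAD such as AES-GCM or ChaCha20-Poly1305, with the key provisioned into the TEE:

```python
import hashlib
import hmac
import os

# Toy application-layer encryption: the sensor seals each reading
# before handing it to the BLE stack, so even after link-layer
# decryption on the phone the payload stays opaque until it reaches
# the environment holding the key (e.g. a TEE trusted application).

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: nonce || ciphertext || 16-byte tag."""
    nonce = os.urandom(12)
    ks = _keystream(key, nonce, len(plaintext))
    ct = bytes(a ^ b for a, b in zip(plaintext, ks))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()[:16]
    return nonce + ct + tag

def open_(key: bytes, blob: bytes) -> bytes:
    """Verify the tag, then decrypt; raises on any tampering."""
    nonce, ct, tag = blob[:12], blob[12:-16], blob[-16:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    ks = _keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))
```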




Data Ownership - Privacy by Default

We have so far discussed some of the technical vulnerabilities associated with your medical data as it transitions from sensor to phone to cloud, but what about the policies that govern how your data is handled and who is held responsible if your data is breached?


There are many entities looking at this very complex problem, which combines both liability and accountability for loss or misuse of data. The two references provided below start to shed some insight into the industry and governmental thinking on how to make patient privacy front and center and to ensure the protection and appropriate use of the personal medical and fitness data gathered. This is a rapidly evolving area and I'm excited to watch it unfold!



1.  Every Step You Fake: A Comparative Analysis of Fitness Tracker Privacy and Security

2.  European Union mHealth code of conduct


Meet Microduino, the Chinese company that is bringing a DIY approach to rapid prototyping for developers and engineers. Founded in 2012 after originally designing its Arduino-compatible boards to monitor server room temperature, Microduino is Ikea-meets-LEGO: Magnetized modules that can be mixed and matched for various applications.


Each module has a function, e.g. Wi-Fi, Bluetooth, or GPS. Snap together the necessary modules to create working prototypes. No messy wiring, dangerous soldering or complex coding. Microduino’s value lies in removing the barrier to entry, enabling makers to take their ideas through the prototype stage and beyond. It’s DIY made (even more) simple.


Potential products range from gesture-controlled music boxes to LED lights, drones, robots, GPS trackers and 3D printers. Microduino's 2013 Kickstarter campaign generated the initial traction needed for its intuitive, open-source approach, allowing the company to refine and apply its designs across a variety of key sectors, including technology, education and environment.


Versatility, scalability

Beyond user-friendly appeal, the modules are versatile, and they scale across a range of systems. The platform is compatible with a variety of microprocessors and controllers (Arduino UNO, ARM Cortex, Atmel AVR/ATMega MCU etc.). A variety of Microduino systems feature ARM processors:

  • Microduino-Core STM32 features an ARM development board with STM32F103CBT6 chip built in.
  • MicroWrt HPin44 series is based on ARM Cortex-A7 and Cortex-A9.
  • MicroPi HPin88 series uses dual-core, quad-core and eight-core SoC.
  • Microduino has developed HiSilicon SoC Hi3517 based on ARM Cortex-A7 and Xilinx Zynq series FPGA board with an ARM core integrated.
  • The company is working with NXP for LPC824 series and MKS22 series that will be used in the NXP Freescale Cup, an intelligent-car race in China.


Microduino’s initiatives have not gone unnoticed by the media. TIME called out Microduino in the article “These 5 Kits can Teach Kids about Computers and Coding.” And the company was featured in “The 5 Best Things from World Maker Faire 2015,” by EE Journal.



Not only is this approach to technology fantastic for makers, it’s superb for education. The Lego-esque design makes it accessible to individuals of different ages, skillsets and backgrounds.

“Never underestimate a student’s creativity; they just need something to inspire it,” said Bin Feng, co-founder and CEO of Microduino.

Microduino has also embedded its technology in education to encourage innovation in the younger generation, similar to ARM’s collaboration with the BBC on the micro:bit initiative. It is working with Maker Space to design and mass-produce a low-cost package for younger school children. Microduino has established a two-credit superkit education course at Tsinghua University. It has also participated in Beijing International Design Week, where students were able to design and build a variety of different products themselves using Microduino kits.

“We are eager to see its contributions to STEM/STEAM education in the coming years,” said Dominic Pajak (dominicpajak), ARM marketing director. “Regardless of socioeconomic background, every young mind will be stimulated and inspired to test and believe in his or her own imagination, potentially training up the next generation of engineers and innovators.”


Nesting Instinct

Microduino reached new heights in another application arena. The company’s 2015 environmental initiative was a joint effort with International Centre for Birds of Prey (ICBP) to help solve the problem of declining vulture populations in South Asia. Vultures are key to a healthy ecosystem: They dispose of waste and help prevent the  spread of disease. Conservationists aimed to monitor and collect nest data (temperature, rotation, humidity) during incubation, to help boost vulture populations in captivity. ICBP’s initial attempts used a bulky system, which mother vultures steered clear of. So ICBP approached Microduino with a challenge: Create an artificial egg to mimic a real vulture egg. The solution is the IoT-enabled system Eggduino, a sensor package containing a system of stackable Arduino-compatible microcontrollers and modules. Disguised as a vulture egg, the sensor package was able to fly under the radar in the nest. Data captured by the egg is relayed from the node to the cloud for analysis, all while giving Mother Nature a helping hand.


Now looking to capitalize on its successful venture into IoT, Microduino is positioning its technology for further expansion into the IoT arena in the education and toy markets.


Ultimately, Microduino’s vision is to use modular IoT to solve the needs of all types of IoT systems. This is significant because enabling rapid DIY prototyping broadens technology’s reach by applying it to find solutions to diverse global initiatives. Whether it is a conservation mission to save the vultures or a biomedical scientist’s quest to create 3D printed human organs, the usefulness of this technology applies across a variety of  arenas. Microduino not only places the building blocks for invention in its user’s hands, but it also empowers makers with a vision to create their own solutions to their needs.



“It’s amazing how easily Microduino can turn your ideas into reality,” said Bin Feng.

Related stories:

Maker Faire 2016: Accessible hardware, software drives new development

Maker Faire 2016: What you need to know about Arduino Create

Learn the entire process of setting up the chain of trust for your IoT solution. The video, which is available on YouTube, provides a practical example that you can follow and set up on your own computer for learning purposes. The comprehensive video tutorial guides you through the process of setting up secure and trusted communication. After completing the hands-on tutorials, you will be an expert in using SSL for secure communication and in creating and managing SSL certificates.


The video shows how to create an Elliptic Curve Cryptography (ECC) certificate for the server, how to install the certificate in the server, and how to make the clients connecting to the server trust this certificate. The server in this video is installed on a private/personal computer on a private network for test purposes.


Check out the IoT article at

The article is tailored for learning purposes and DIY projects and includes lots of useful information on using security in memory constrained edge nodes.


How to run your own secure IoT cloud server for $8/year



An illustration of the size difference between an ECC Certificate, an RSA Certificate, and a chained RSA Certificate
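The size difference is easy to verify yourself. The following sketch, which assumes the third-party Python `cryptography` package is available, compares the DER-encoded public keys behind an ECC (P-256) certificate and an RSA-2048 certificate; the smaller ECC encoding is part of why ECC certificates suit memory-constrained edge nodes:

```python
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def der_pubkey_len(private_key) -> int:
    """Length of the DER-encoded SubjectPublicKeyInfo, the public-key
    portion that ends up inside an X.509 certificate."""
    return len(private_key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo))

ec_len = der_pubkey_len(ec.generate_private_key(ec.SECP256R1()))
rsa_len = der_pubkey_len(
    rsa.generate_private_key(public_exponent=65537, key_size=2048))
print(ec_len, rsa_len)  # the P-256 key is several times smaller
```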

Mobile communications has been the foundation for the explosion in smartphones. Enormous consumer demand for mobile wireless broadband services has driven the last decade of telecom standards, resulting in the LTE-Advanced 4G multi-mode devices that we take for granted today.


Beyond serving the needs of smartphones, mobile operators are increasingly thinking about what role they can play in delivering the Internet of Things (IoT). The IoT market is still considered to be in its infancy, but according to industry analyst firm Gartner, by 2020 we can expect over 26 billion ‘things’ or devices to be connected to the internet, the majority of them likely served via wireless connections.


LTE Cat-M - a cellular standard for IoT

IoT devices will connect to the Internet through wired and wireless communication technologies. The wireless technologies could be both cellular and non-cellular. In the case of the local area unlicensed band standards, for example Bluetooth and WiFi, a router is needed to reach the Internet. The LTE Cat-M (Cat-M1) standard is a cellular standard and has a number of benefits compared to the non-cellular technologies. One obvious benefit is the existing infrastructure for LTE, where operators around the world have been rolling out this technology since 2009. According to GSA, there are now 480 LTE networks launched in 157 countries and Ericsson predicts that more than 70% of the world population will have LTE access by 2020.


There are several key additions to the Cat-M (Cat-M1) specification in 3GPP release 13 providing lower cost and power consumption.


The first LTE specification in release 8 specified 4 categories, with Cat-4 as the highest, supporting up to 150Mbits/s in the downlink. Modem complexity is derived from this category and normalized to it. Cat-0 was specified in release 12 as an intermediate step towards a competitive LTE specification for IoT applications. The complexity of LTE Cat-0 vs LTE Cat-4 is estimated to be reduced by 40%, mainly due to lower data rates but also from the change in duplex mode, where half duplex eliminates the need for a duplexer and so saves cost. LTE Cat-M (Cat-M1) is an optimized IoT version of Cat-0 where the major change is the system bandwidth reduction from 20MHz to 1.4MHz. Another important change is the transmit power reduction to 20dBm. This reduction eliminates the need for an external power amplifier and enables a single-chip solution, again reducing cost. NB-IoT (Cat-M2), the next step with a lower bandwidth of 200kHz, will further reduce cost and power consumption.


Mistbase and ARM have written a paper in which we investigate how the new 3GPP Rel-13 standard is enabling IoT, both from a HW/SW architecture point of view and from a performance point of view. In this paper we focus on LTE Cat-M (Cat-M1) and look ahead to NB-IoT (Cat-M2), which is Mistbase's core business.


Link: White paper: LTE Cat-M - A Cellular Standard for IoT


Mistbase homepage


Presentation from the NDC London Conference earlier this year:

IoT: Gold Rush or Wild West - Niall Cooling on Vimeo



"IoT sits, not unfairly, at the peak of this year's Gartner "Hype Cycle for Emerging Technologies". This talk tries to cut through the marketing bullshit and attempts to build a taxonomy for what should, and more importantly, what shouldn't be considered a 'Thing' on the Internet. It examines both the competing infrastructure technologies and the upcoming pseudo-standards. Hopefully by the end you'll be less confused, or at least understand why you're confused!"

The Bluetooth SIG announced the awaited arrival of the new version of the Bluetooth low energy standard: the new Bluetooth 5 standard is indeed faster (twice as fast) and goes farther (quadruple the range) than its predecessor, Bluetooth 4.2, with additional capacity. This capacity increase will enable broadcasts to carry more content (800% more), resulting in more meaningful and fuller information, thereby unleashing more IoT applications.


With all of these changes coming, Bluetooth low energy will stay true to its low energy credentials. In order to get the speed, range and capacity increases, changes from the transceiver through the stack are required. ARM, being the only company to offer RF-to-stack solutions in house, is in a unique position to lead the market in implementation and enablement of these features.


The Cordio radio IP has had 2Mbps capability for several silicon versions and can support the newly announced enhancements with no additional modifications. We have demonstrated this at Bluetooth World 2016 and recently at DAC 2016.


Have a look at this short video of the demo!

New low power Bluetooth 5 - 2Mbps transfer demo with ARM Cordio based Radio IP


2Mbps Demo - We have two ARM evaluation boards talking to each other with test chips built with ARM Cordio radio IP – RF transceiver to application, all built in-house. These can switch between the 1Mbps PHY and the higher data rate 2Mbps PHY. The theoretical maximum throughput at 2Mbps is 1.5-1.6Mbps. Besides showcasing the availability and implementation of the higher data rate feature, what is remarkable is that the ARM Cordio IP is efficiently designed to achieve the maximum theoretical throughput.
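That theoretical maximum can be sanity-checked with a back-of-envelope calculation. The timing figures below (preamble, access address, header and CRC sizes, plus the 150µs inter-frame space) are standard Bluetooth 5 LE 2M PHY numbers, but the exact result depends on which overheads you count, so treat this as a rough estimate rather than the figure quoted above:

```python
# Timing assumptions (LE 2M PHY): 16-bit preamble, 32-bit access
# address, 16-bit header, 24-bit CRC, up to 251 data bytes per packet,
# and a 150us inter-frame space around an empty acknowledgement.

PHY_BITS_PER_US = 2   # 2Mbps PHY
T_IFS_US = 150        # inter-frame space

def packet_time_us(payload_bytes: int) -> float:
    overhead_bits = 16 + 32 + 16 + 24   # preamble + AA + header + CRC
    return (overhead_bits + 8 * payload_bytes) / PHY_BITS_PER_US

def throughput_bps(payload_bytes: int = 251) -> float:
    # one data packet + IFS + empty ACK + IFS per exchange
    cycle_us = (packet_time_us(payload_bytes) + T_IFS_US
                + packet_time_us(0) + T_IFS_US)
    return 8 * payload_bytes / cycle_us * 1e6

print(round(throughput_bps()))  # ~1.4Mbps under these assumptions
```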


For more information on ARM Cordio please visit

Chinese Version(中文版) :DAC 2016: 物联网设计需要强健的生态系统

AUSTIN, Texas—We argue about the definition and scope of the Internet of Things (IoT), but what, to me, is inarguable is its profound impact on design.

Nandan Nayampally speaking at Chip Estimate at DAC 2016

A market with a nearly infinite number of design possibilities is one the electronics design world is experiencing for the first time. The industry is used to building big tools, platforms and SoCs for big-reward markets: workstations, personal computers, mobile phones, networking gear and the like. It’s at its most efficient and profitable when process, methodology and design coalesce to take advantage of large-volume opportunities.

But IoT is none of that. Yet the industry, from IP to EDA, has pivoted deftly in recent years to start to meet the challenge.


This was on display last week at Design Automation Conference 2016 here.

“What we need to understand about IoT is that it is literally everything—gateways, sensors, smart phones,” said Nandan Nayampally (nandannayampally), vice president of marketing with ARM. “IoT is not a device. It’s not even a segment. It’s delivering service across the cloud to connected devices. In the IoT world, it’s not about delivering the killer SoC; silicon is a vehicle that delivers your killer solution.”


He said that it’s really about bringing multi-disciplinary expertise and using silicon as an enabler for newer, untethered--but connected--use models.


And what’s more important he said, “one size does not fit all.”


Herein lies the challenge: How to develop flexible and scalable IP and the right mix of EDA tools, methodologies and services to serve and enable this market.


And for the developers, how do you pick your spots and with what?


“It’s tough to design monetization up front,” said Frank Schirrmeister (fschirrmeister), senior group director, product management and marketing, with Cadence Design Systems. Schirrmeister said wearables profitability can be elusive, given the uncertain volumes and competition. But there is profitability to be found around the device.


“For the startup, the question becomes how will I monetize? Is it the app? Is it the chip?” he said.


Scaling up

Today, most startups are small-scale affairs, some with crowd funding, some with funding from their own pockets. And it’s an era in which interesting designs get started on platforms like Raspberry Pi and Arduino. Remember that just a few short years ago, startups needed big ASIC teams paying hefty NRE to get a design to market.



This innovation empowerment is extraordinary, but for most it’s only a start, and can be insufficient to achieve escape velocity.

“Lowering the hurdle to design cost and verification/implementation cost is huge,” Schirrmeister said.

Nayampally said it’s all about “time to money.” Enabling that initial design is key, of course, but after that it becomes a race to volume, and in IoT this is a competitive race.


“You can design for one vertical, but that vertical may not be enough to sustain you,” Nayampally said on a panel with Schirrmeister here at DAC. “You may need to scale this design for something that scales horizontally as well. The one thing we can do is lower the barrier to allow for rapid innovative design.”


Removing Design Roadblocks

ARM continued confronting that challenge this week when it announced an expansion of the DesignStart program and unveiled an ARM-approved design house service. Both are intended to give designers and developers easy access to an ecosystem of tools and support for everything from prototyping to production.


In expanding DesignStart, ARM now offers simplified and expedited access to EDA tooling and design environments from Cadence and Mentor Graphics.


The new ARM Approved Design Partner program provides DesignStart users with a global list of audited design houses for expert support during development. The first four, announced June 6, are Open-Silicon, Sondrel, eInfochips, and SoC Solutions.


“With DesignStart, it’s a simple three steps, easy access to design. You can download Cortex-M0 with a license for free, start playing around with it,” Nayampally said. “Then you can prototype, plus put in your logic into an FPGA. It’s really good for startups and makers to get their designs worked out and get their next level of funding.”


Nayampally said ARM has had partners who’ve taken the system design kit and gone to tapeout in 4 months.


“That’s very good if you’re startup paced,” he said.


Related stories:

--DAC 2016: Just how much security is enough?

-- DAC 2016: ARM expands efforts to speed designs to prototype, production

-- ARM and Cadence Work Together to Simplify IoT Design

-- An exciting new partnership program launched today!

I'd like to introduce Ameba to you.

If you are looking for a high-performance, cheap, Arduino-compatible board for some cool projects, Realtek's Ameba is the right choice.


Ameba features a high-performance CPU (ARM Cortex-M3) and embedded memory. It provides complete networking and Wi-Fi protocols, a hardware SSL engine, and various serial ports including UART, I2C, SPI, and PWM. Ameba also integrates ADC, DAC, high-speed SDIO, and USB to directly control analog-type sensors or devices. For connectivity, Ameba provides software support for Apple Wireless Accessory Configuration (WAC).





Key Features

– 32-bit ARM Cortex-M3, up to 166MHz

– Integrated 802.11 b/g/n 1×1 Wi-Fi

– NFC Tag with Read/Write Function

– 10/100 Ethernet MII/ RMII/RGMII Interface


– SDIO Device/SD card controller

– Hardware SSL engine

– Maximum 30 GPIOs

– 2 SPI interfaces, supporting both master and slave mode

– 3 UART interfaces, including 2 HS-UART and one log UART

– 4 I2C interfaces, supporting both master and slave mode

– 2 I2S/PCM interfaces, supporting both master and slave mode

– 4 PWM interfaces

– 2 ADC interfaces

– 1 DAC interface




Some cool things you can do with this dev board:


Ameba Remote Car With Video Streaming :

You can find the detailed example here


Gesture Control With Six-axis Gyro:


Ameba Hitachi Aircon. Control Through Cloud Database (Xively):


Smart Plant Watering System:


You can join developers' Facebook group at:

Or visit the web page for more information

How is the Internet of Things landscape like San Francisco in the 1800s? Well, the gold rush gave opportunity to many who came to find their fortune. They sought to move beyond the established order by applying new thinking to tap into the huge reserves that everybody knew were in those hills. Similarly, the IoT gives opportunity to thousands of designers and start-ups who can develop new applications to solve existing problems or address use cases in a more effective way. Many of those applications will require integration and performance levels that drive the need for specialised IoT chips.


ARM® is committed to enabling the IoT everywhere, and part of this means removing barriers to entry and streamlining the SoC design process. Three such barriers stand out:



  • Time: Due to the new nature of the IoT, market trends and requirements can change very suddenly meaning that there is a narrow time window to build a chip and get it to market.
  • Expertise: IoT chips are complex because they require integration of very different types of IP: digital processing, radio or wired connectivity, mixed-signal blocks, low power memory, Flash, … and all this with an ultra low power budget. SoC designers not only need to integrate processors, memory, connectivity and other IP blocks together, but then implement it in a system. Using a range of different tools and methodologies requires expertise.
  • Access: When you are focused on tweaking the design of your SoC to meet your target application, you don’t want to be thinking about the nitty gritty of setting up the right infrastructure for IP storage and EDA tools.




Helping Designers from Concept to Silicon


The announcement that the ARM IoT Subsystem for Cortex®-M has been ported into the Cadence® Hosted Design Solution is a meaningful step in the direction of helping to streamline the process of taking an idea from concept to silicon for the IoT. It is suited to a wide spectrum, from start-ups to experienced SoC designers with the promise of reduced time and increased flexibility.


The package includes processor IP, a wireless connectivity solution, interface IP, software, design tools (both front- and back-end), and an optimized design methodology with scripts.



Example building blocks of an IoT SoC, and floorplan



The combined offering accelerates mixed-signal SoC design for the IoT, as it incorporates ARM’s IoT subsystem for Cortex-M and Cadence’s Innovus Implementation System, which is now optimized specifically for Cortex-M. The mixed-signal IoT flow is also available on the Cadence Hosted Design Solution (HDS), a Software as a Service (SaaS) model that offers quick access to various EDA tools, a broad portfolio of IP, support and proven methodologies for IoT design. By gaining access to the Cadence HDS Enablement Program, designers can accelerate time-to-silicon through increased productivity, flexibility and level of support. Access to lower-cost hardware infrastructure is also ideal for start-ups and those new to IoT SoC design.



Let’s take a closer look at what’s included:


Processor IP: The IoT subsystem connects to an ARM Cortex-M3 processor. The Cortex-M3 is used in many current IoT devices, and is a good choice to run complex software stacks like mbed OS while keeping the power consumption low. Learn more about the Cortex-M3


Wireless connectivity: The IoT subsystem is ready to connect to the ARM Cordio® radio, bringing Bluetooth® Smart connectivity. Integration of the Bluetooth software stack with mbed OS is already done, so the software to use it is ready to go. Unplug and play! Learn more about Cordio Bluetooth LE radio


Interface IP: Cadence interface IP blocks can be very easily connected to the IoT subsystem, and software drivers for common interfaces are already pre-integrated in IoT software available from ARM. This makes it easier and faster to build applications connecting to sensors and other peripheral components.


Software: Software development is a very large part of the design time. The IoT subsystem has been developed in sync with ARM’s open-source mbed™ OS, and hardware drivers are already available when you download the software stack. This makes an SoC based on the IoT subsystem very attractive to software developers, who can start creating applications and use all the features of mbed OS right away. Learn more about mbed OS


Front-end design tools: The Cadence Genus™ Synthesis Solution delivers the best possible productivity during register-transfer-level (RTL) design and the highest quality of results (QoR) in final implementation.

Learn more about Genus


Back-end design tools: The Cadence Innovus™ Implementation System is a physical implementation tool that typically delivers 10-20% production-proven power, performance, and area (PPA) advantages, along with up to 10X turnaround-time (TAT) gains on advanced 16/14/10nm FinFET designs as well as on established processes. Learn more about Innovus



It has been over a year since ARM announced the IoT Subsystem for Cortex-M to help developers reduce the time and risk involved in SoC design. Since then, ARM has publicly displayed silicon as part of a test chip that took 3 engineers 3 months to tape out. You can find out more in liamdillon's blog: ARM enables IOT with Beetle Platform.


Cadence’s Hosted Design Solution (HDS) is the ideal environment to help designers get started on their IoT development. The ability to remotely access proven EDA tools and support makes it easy for teams to work no matter where they are in the world.




What all of this adds up to is a complete end-to-end solution for designers. Reducing the time and expertise needed to enter the market spreads the opportunity that IoT brings to a much wider audience than before. Enabling even more designers can be the catalyst to accelerated innovation and targeted applications for new use cases.




Innovation Comes From Removing Barriers to Entry


If we truly want to make it the Internet of Everything, then we need to make it as easy as possible for people to develop the applications that will shape the future. Reducing the barriers to doing so (cost, expertise and time) is a large step towards enabling a wide range of designers to begin developing their ideas.


Enabling the creation of optimized SoCs with a simpler process will also drive innovation and make possible a new range of applications that could not be implemented with off-the-shelf components due to cost, size or power-consumption issues. ARM and Cadence therefore enable a new world of opportunities to companies that previously could not afford this level of integration into their products.

With a little space between myself and the Bay Area Maker Faire, I want to follow up on some of the themes I explored in my previous blog (Industrial Makers? BeagleBone, Rasberry Pi and Arduino Move Towards Modules) on the industrialization of Maker platforms. While not a showcase for the latest commercial embedded applications, Maker Faire is a natural home for Arduinos and Raspberry Pis and a great place to spot these platforms being used for interesting things.


I spotted a number of BeagleBone Blacks finding their way into more serious robotics and machining applications. The OpenROV project has a completely open-source underwater drone design, with the primary control software running on Linux on the BeagleBone. The project has truly embraced open source, with all hardware schematics (electrical and mechanical) and software available online.


The BeagleBone Black was also the brains behind a stunning five-axis desktop CNC milling machine from PocketNC. 3D printing may have captured most of the fabrication hype in recent years, but CNC milling is, at this point, a more precise, versatile and production-quality form of machining. The ‘democratization’ of this kind of tool to a price and usability level that will put it in the hands of a much wider audience has interesting implications. Could things like this unlock micro-manufacturing on demand, much closer to the end of the supply chain? Or will these machines remain the domain of enthusiasts and local maker spaces?


Two more projects crossed my radar that I’ll explore further. Robbie the Robot was the result of a 24-hour hackathon entry by a team from Finger Food Studios. Robbie explores anthropomorphism and human recycling behaviors, seeking to use empathy to educate, build awareness and encourage more recycling. What I found very interesting about Robbie was that he is a robot built by a team consisting primarily of software app developers. Robbie’s personality is an Android application running on a Qualcomm DragonBoard 410c. This is interfaced to a Raspberry Pi handling the I/O to his various sensors and actuators. A Bluetooth Low Energy (BLE) interface connects to an iOS app providing additional control and interfacing. Robbie also uses several AT&T APIs, including SMS, text-to-speech, speech-to-text, M2X, Sponsored Data and Data Rewards. Robbie is a great example of how the evolution of platforms, and expanding support for ecosystems such as Android, brings new types of developers and engineers into traditionally embedded fields like robotics. Accessible hardware and software make this possible and could unlock talent and insight from other disciplines for IoT and embedded.


The final project I’d like to share is hard to talk about without boyish enthusiasm. I grew up in awe of the Space Shuttle missions, the brave women and men exploring a new frontier, and that almost mythical agency making it all happen – NASA. This past weekend I got to hang out with a real NASA engineer, who showed me pictures of his project in space as we talked about some of the interesting work his division is doing with micro-satellites. The Nodes (Network and Operations Demonstration Spacecraft) project was launched from the ISS last week as a testbed for mesh networking protocols being developed with an eye towards swarms of micro-satellites for applications such as mapping the Earth’s orbital radiation environment. These satellites’ primary control software is an Android app. Yes, you read that right – an Android app.


The internal electronics are actually the main board from a Nexus S smartphone, alongside a couple of Arduino boards. The smartphone board provides the main application processor and hosts the main program, and the satellite also uses several of its on-board MEMS sensors, such as the magnetometer and accelerometer. The Arduino devices provide the watchdog program, power management, and attitude control. The main board interfaces to the Arduinos and the radio hardware via a UART over USB.


I had a fantastic talk with the engineer around the theme of ‘good enough’ hardware. Obviously, off-the-shelf consumer electronics and hobbyist platforms like Arduino don’t hit the same levels of reliability as high-spec industrial parts. But they get better and better all the time, and the software environments around them improve at a lightning pace. They may not fit the bill for mission-critical applications, but in a meshing swarm of micro-satellites, the loss of an individual node can be rapidly compensated for as long as the network as a whole remains functional. Utilizing common devices and software platforms obviously reduces cost and development time: certainly for prototyping and, depending on the end use case, even for production. Good enough for NASA!


I think these projects all demonstrate the interesting places where a generation of extremely accessible hardware and software platforms is taking us. The lines are blurring between embedded and other disciplines as the IoT really starts to ramp, and this dovetails interestingly with a world where the tools of the trade have never been cheaper or easier to get started with.


Related stories:

Industrial Makers? BeagleBone, Rasberry Pi and Arduino Move Towards Modules

Maker Faire 2016: Accessible hardware, software drives new development

What I learned at World Maker Faire, New York 2015


Hello and welcome back to this two-part blog series where we revisit our Sensors to Servers demonstration (part 1 can be found here). In this instalment, we will take a look at how we modified the demo to enable cloud hosting and ran it concurrently at Mobile World Congress and Embedded World 2016.



Before retiring Sensors to Servers we decided to give it one last hurrah to show off some of ARM®’s latest products, notably mbed OS and mbed Device Connector. This year's Mobile World Congress (MWC) in Barcelona and Embedded World (EW) in Nuremberg were the perfect stages, as these two major trade shows happened to be held in the same week. We came up with the idea of displaying the live data from both shows, at both shows. Whereas all previous deployments used a local ARMv8-A based 64-bit server, to make this work we had to put the entire back end of the demo in the "cloud" and update our sensor node software to work with this topology.



System diagram (click to enlarge)


You can see in the diagram that each show required an Internet-connected router. If you recall, the sensor nodes can communicate with the server using either 6LoWPAN or Ethernet. Large trade show floors tend to create hostile RF environments, which led us to choose Ethernet to guarantee a stable and reliable connection. We connected all of the sensor nodes and the camera feed (see below) to the router. To view the visualisations, all we needed was a web browser running on an Internet-connected ARM Cortex-A based client device.


Sensors Side: mbed Classic to mbed OS

Approximately a year ago, ARM announced its plans for the next generation of mbed, mbed OS. mbed OS (v3.0) is superseding mbed "Classic" as our Internet of Things (IoT) embedded operating system for ARM Cortex-M based microcontrollers. mbed OS went into its beta release phase in August last year. We immediately got our hands on the yotta build tools and started playing around with the new software. At the time there was no integrated development environment (IDE) support for the tools, so we downloaded and customised a "clean" version of the open-source Eclipse IDE to manage the project, edit source files and run the yotta commands to update modules, build, etc.


Sensor node


Once we were familiar with the OS and tools we quickly turned our attention to porting our sensor node application software from mbed "Classic" to mbed OS. When porting between the two versions, the main difference to be aware of is that mbed OS has an event-driven architecture rather than a multi-threaded real-time operating system (RTOS). This is due to its highly integrated security and power management functions, which allow developers to pull in a variety of different communication stacks while the OS keeps the application secure and power efficient. Luckily for us, we had already written our original node software in an event-driven manner, so the port was fairly straightforward.


We were able to use the same sensor libraries as before. Some of these libraries were imported from GitHub repositories and some were custom written. The built-in scheduler used in mbed OS (MINAR) allowed us to post periodic callbacks to read and update sensor data where appropriate. Two versions of the node software were written: one for 6LoWPAN communications and one for Ethernet communications. We used a modular approach for integrating the sensor libraries, allowing us to choose which sensors were active in any one node, so the software project was as flexible as the hardware.
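The node firmware itself is C++ on mbed OS, but the pattern of posting periodic callbacks per active sensor can be sketched in plain Python with the standard-library scheduler. The sensor names, intervals and values below are illustrative assumptions, not the demo's actual code.

```python
import sched
import time

# Minimal sketch of the event-driven pattern used on the nodes: a scheduler
# posts periodic callbacks that read each sensor, in the spirit of MINAR.
# Sensor names, intervals and the stand-in reading are illustrative only.
scheduler = sched.scheduler(time.monotonic, time.sleep)
readings = []

def read_sensor(name, interval, remaining):
    """Read one sensor, then re-post the callback (a periodic callback)."""
    readings.append((name, 21.5))  # stand-in for a real driver read
    if remaining > 1:  # re-arm the callback for the next period
        scheduler.enter(interval, 1, read_sensor, (name, interval, remaining - 1))

# Post one periodic callback per active sensor (modular: pick any subset).
for sensor, period in [("temperature", 0.01), ("humidity", 0.02)]:
    scheduler.enter(0, 1, read_sensor, (sensor, period, 3))

scheduler.run()
print(len(readings))  # 6 reads in total: 3 per sensor
```

Because each callback re-posts itself, disabling a sensor is as simple as never posting its first callback, which mirrors the modular approach described above.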


Server Side: Local Server to Cloud Hosting

Next, we turned our attention to the server. We needed to determine how the sensor data should be collected and handled in the cloud. For this, mbed has two offerings: mbed Device Server and mbed Device Connector. To make the distinction, mbed Device Server is the middleware that connects your IoT devices to your web applications, while mbed Device Connector is a cloud-hosted service comprising mbed Device Server and a developer console. It allows your mbed Enabled IoT device to connect to the cloud without the need to build your own infrastructure.



ARM booths at MWC (left) and EW (right) 2016


To move from our local server to the cloud, we first had to choose a third-party cloud service. We chose the Microsoft® Azure cloud computing platform. I would love to give a technical reason why we chose Azure but, being honest, it was recommended to us by ARM's IT department, who had used it for previous projects; frankly, any one of our cloud partners would have been suitable.


Previously, we had written an application which used Device Server's REST APIs to filter the received sensor updates and post them into a SQLite database. The original application was written in Java. With the updated version of Device Server and the switch to the cloud, we decided to move from Java to Node.js®. This did mean we had to re-write our application, but Node.js made it much easier to handle the REST APIs and it was only a few hours' work. To test the demo, some of my colleagues took a bunch of sensor nodes home and plugged them into their home LANs. A tweak of their firewall settings and they were away. Now we were ready to plug our sensor nodes into any Internet-connected router anywhere in the world and our application would receive the updates.
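Our application was Node.js, but the filter-and-store step can be sketched in Python with the standard-library sqlite3 module. The notification shape and resource paths below are illustrative assumptions, not mbed Device Server's actual payload format.

```python
import json
import sqlite3

# Hedged sketch of the filter-and-store step (the real app was Node.js).
# The notification shape and resource paths are illustrative assumptions,
# not mbed Device Server's actual payload format.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (node TEXT, resource TEXT, value REAL)")

WANTED = {"/sensors/temperature", "/sensors/humidity"}  # resources we chart

def handle_notification(payload):
    """Store only the sensor resources we visualise; drop everything else."""
    for note in json.loads(payload)["notifications"]:
        if note["path"] in WANTED:
            db.execute("INSERT INTO readings VALUES (?, ?, ?)",
                       (note["ep"], note["path"], float(note["value"])))
    db.commit()

payload = json.dumps({"notifications": [
    {"ep": "node-01", "path": "/sensors/temperature", "value": "21.5"},
    {"ep": "node-01", "path": "/debug/uptime", "value": "999"},  # filtered out
]})
handle_notification(payload)
print(db.execute("SELECT COUNT(*) FROM readings").fetchone()[0])  # 1 row stored
```

Filtering before the insert keeps the database (and the visualisation queries) limited to the handful of resources the scrolling pages actually display.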



Apart from the odd tweak, how we visualised the collected data was largely unchanged from the original version of the demo. However, to contextualise the data from the two different locations we added a small camera feed on one of the three scrolling pages. One interesting note here is how we displayed the visualisations at Embedded World: the design of the booth left us with only a very small compartment to hide away our equipment. Where we would normally have used a Google Chromebook, we instead used the ASUS Chromebit CS10, powered by an ARM Cortex-A17 and ARM Mali-T760 based system on chip (SoC) from Rockchip. This small HDMI stick running Chrome OS was perfect for hiding away behind the monitor while giving us the same functionality as a clamshell Chromebook.



Data captured during Embedded World 2016 (click to enlarge)



Sensors to Servers demo station during Embedded World 2016 setup


Live Camera Feed

Once every minute, a still image was captured at each show and displayed on the corresponding visualisation. The camera feed came courtesy of a quad-core ARM Cortex-A7 powered Raspberry Pi 2 and a Raspberry Pi Camera Module. The Raspberry Pi was running the Raspbian Jessie Lite Linux-based operating system. A small bash script was written and scheduled to run once every minute by a cron job. The script captured the image using the raspistill command line tool and uploaded it to the cloud server over HTTP using the curl command line tool. By compressing the images down to only a few hundred kilobytes we minimised the upload time, and we saved a copy of each image on the Raspberry Pi's memory card to create a time-lapse video.
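The actual job was a bash script shelling out to raspistill and curl, but the capture-archive-upload flow can be sketched in Python. The upload URL, file layout and command flags here are illustrative assumptions, not the script we ran.

```python
import pathlib
import subprocess  # would invoke raspistill/curl on the Pi; not exercised here
from datetime import datetime

# Hedged Python sketch of the once-a-minute bash job (raspistill + curl).
# The URL, directory layout and command flags are illustrative assumptions.
UPLOAD_URL = "http://example.com/upload"  # hypothetical cloud endpoint

def frame_name(when):
    """Timestamped filename so archived frames sort into a time lapse."""
    return when.strftime("frame-%Y%m%d-%H%M.jpg")

def capture_and_upload(archive_dir, now, capture=None, upload=None):
    """Capture a still, keep an archive copy for the time lapse, then upload."""
    path = pathlib.Path(archive_dir) / frame_name(now)
    if capture is None:  # on the Pi this would shell out to raspistill
        capture = lambda p: subprocess.run(
            ["raspistill", "-o", str(p), "-q", "10"], check=True)
    if upload is None:   # ...and this would shell out to curl
        upload = lambda p: subprocess.run(
            ["curl", "-sf", "-F", f"image=@{p}", UPLOAD_URL], check=True)
    capture(path)
    upload(path)
    return path

# Dry run with stand-in capture/upload callables so nothing is shelled out.
p = capture_and_upload("/tmp", datetime(2016, 2, 24, 10, 30),
                       capture=lambda p: None, upload=lambda p: None)
print(p.name)  # frame-20160224-1030.jpg
```

Keeping the archive copy on the memory card, named by timestamp, is what makes assembling the time-lapse video afterwards trivial.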


Camera setup



That's it for these two Sensors to Servers demo revisited blogs; I hope you've enjoyed reading them. If you are visiting a large trade show in the future, be sure to stop by the ARM booth and see what great demos ARM are showcasing. Our friendly and knowledgeable engineers will be very happy to give you a demonstration and answer any questions you may have on ARM and our technology. Thanks for reading!
