Developers of software for edge systems face increasing complexity as hardware heterogeneity expands. The portable, frictionless container-based development and deployment model differs significantly from traditional embedded systems development flows that are highly optimized for specific SoCs and boards.
Arm has been examining how edge computing must evolve to enable the next phase of digital deployment at scale. As part of this work, we have explored ways to deploy workloads on hybrid systems: boards with an SoC containing a Cortex-A plus a Cortex-M or Cortex-R. We have implemented a proof of concept showing that, within a hybrid system, we can deploy applications onto the Cortex-M from the Cortex-A using cloud-native orchestration tools.
This blog post explains the motivation and benefits of the hybrid-runtime, gives a high-level overview of how it works, and links to a learning path that helps you run your own hybrid-systems deployment demo.
The hybrid runtime enables software to be deployed, using cloud-native technologies, onto the other processors in the system (Cortex-M or Cortex-R) while Linux runs on the Cortex-A application cores.
This makes firmware updates easy, secure, and controllable at scale. Firmware on Cortex-M microcontrollers is typically updated to fix bugs, patch security vulnerabilities, improve performance, or add functionality. The hybrid runtime makes it possible to update what is running on the Cortex-M on demand for any of these purposes.
The hybrid runtime also enables an application to be partitioned into multiple parts, where each part runs on a different core depending on its requirements. For example, to save energy, one part can run on a Cortex-M while the main CPU is asleep; once an event is detected, the Cortex-A is woken up and starts running its part of the application. All the parts can then be deployed and managed in unison: the software running on the Cortex-M cores becomes an integral part of the total application deployment, alongside the software running on the Cortex-A.
We can therefore leverage the existing cores on every edge node. By making them easily accessible, we can take full advantage of all the resources available in the system.
For example, we can use k3s to deploy an application that is segmented into multiple services, where the part running on the embedded core (Cortex-M or Cortex-R) listens for an event to be triggered. Once the event fires, the embedded application wakes up the application core, which was put to sleep to save power.
Each processing element can be dedicated to a separate function. In a drone, for example, real-time motor control is handled by a Cortex-M subsystem while the Cortex-A runs a high-level wayfinding camera-control application.
Before entering sleep mode, the application core can also offload some of its tasks, such as networking functions, to a Cortex-M.
The aim of the hybrid-runtime is to preserve the same developer experience whether you are deploying a simple containerized application or an embedded one. In practice, this means the embedded part of the application can be deployed with a high-level container runtime or orchestrator such as containerd or k3s, using the exact same command as for a normal containerized application. All we must do is specify which low-level runtime is required (runc or our hybrid-runtime).
For example:
containerd support for different runtimes
Deploying using runc:
ctr run --runtime io.containerd.runc hello_world:latest normal-container
Deploying using hybrid-runtime:
ctr run --runtime io.containerd.hybrid hello_world_imx8mp:latest hybrid-container
When we check what containers are running:
$ ctr container ls
CONTAINER          IMAGE                       RUNTIME
hybrid-container   hello_world_imx8mp:latest   io.containerd.hybrid
normal-container   hello_world:latest          io.containerd.runc
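When deploying through k3s instead of ctr, Kubernetes selects the low-level runtime through a RuntimeClass. The sketch below shows the general pattern; the handler name `hybrid`, the pod name, and the image tag are assumptions, and the handler must match whatever runtime name is registered in the node's containerd configuration.

```shell
# Sketch: selecting the hybrid runtime from k3s via a Kubernetes RuntimeClass.
# The handler name "hybrid" is an assumption; it must match the runtime
# registered in the containerd configuration on the node.
if command -v kubectl >/dev/null 2>&1; then
    kubectl apply -f - <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: hybrid
handler: hybrid
EOF
    # A pod then opts in with runtimeClassName:
    kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hybrid-hello
spec:
  runtimeClassName: hybrid
  containers:
  - name: hello
    image: hello_world_imx8mp:latest
EOF
else
    echo "kubectl not available on this machine"
fi
```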
The hybrid-runtime is a low-level Open Container Initiative (OCI) compatible runtime.
Three components make up the runtime, shown in green in the architecture figure, all written in Rust:
The runtime implementation follows the OCI CLI specification requirements: as stated in the OCI specification, the runtime must provide a CLI that allows the user to create, start, delete, and kill containers, and to print their logs:
$ runtime-CLI [global-options] <COMMAND> [command-specific-options] <command-specific-arguments>
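Concretely, a container lifecycle driven through such a CLI could look like the sketch below. The binary name `hybrid-runtime`, the subcommand names, and the bundle path are illustrative placeholders following the runc-style convention, not the project's confirmed interface.

```shell
# Illustrative OCI-style lifecycle; "hybrid-runtime", the subcommands,
# and "./bundle" are placeholder names, not a confirmed interface.
if command -v hybrid-runtime >/dev/null 2>&1; then
    # Create a container from an OCI bundle (config.json + rootfs)
    hybrid-runtime create --bundle ./bundle hybrid-container
    # Start it: the runtime loads the firmware and powers on the core
    hybrid-runtime start hybrid-container
    # Print logs received from the remote core
    hybrid-runtime logs hybrid-container
    # Stop the container and remove its state
    hybrid-runtime kill hybrid-container
    hybrid-runtime delete hybrid-container
else
    echo "hybrid-runtime not installed on this machine"
fi
```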
The CLI allows the user to interact directly with the runtime. However, we can also interact with the runtime through containerd or orchestration tools like k3s, not just through the CLI.
The runtime provides the core functionality behind each of the commands above. It uses the remoteproc framework to control remote processors (power on, load firmware, power off) from Linux running on the Cortex-A, thereby bringing the functionality of remoteproc to orchestrators and cloud-native tools. The runtime also relies on RPMsg to retrieve, on the main CPU, logs from applications running on a remote processor; this assumes the application on the remote processor sends its logging information over RPMsg. Since firmware images are board-, SoC-, and core-specific, the runtime needs to map the right firmware to the right combination of board, SoC, and processor. To tackle this, we label the firmware container image at creation time with the details the runtime needs to match it against the processors available on the board.
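For context, the standard Linux remoteproc sysfs interface that the runtime drives looks like the sketch below. The device path `remoteproc0` and the firmware filename are examples; the firmware file must already be present under /lib/firmware.

```shell
# Sketch of the standard Linux remoteproc sysfs interface.
# "remoteproc0" and the firmware name are examples for illustration.
RPROC=/sys/class/remoteproc/remoteproc0

if [ -d "$RPROC" ]; then
    # Select the firmware file (looked up under /lib/firmware)
    echo hello_world_imx8mp.elf > "$RPROC/firmware"
    # Power on the remote core and load the firmware
    echo start > "$RPROC/state"
    # Reports "running" once the core is up
    cat "$RPROC/state"
    # Power the core off again
    echo stop > "$RPROC/state"
else
    echo "no remoteproc device on this machine"
fi
```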
A lightweight component that sits between the hybrid-runtime and containerd. It facilitates communication between them, handling tasks such as container process management and keeping track of container status.
Hybrid runtime high-level architecture
The codebase and documentation can be found at https://github.com/smarter-project/hybrid-runtime.
We have also created a learning path with a detailed walkthrough of how to run a demo using the runtime with containerd or k3s: https://learn.arm.com/learning-paths/embedded-systems/cloud-native-deployment-on-hybrid-edge-systems/.