Unifying Arm software development with Docker

Jason Andrews
October 31, 2019
8 minute read time.

Earlier in 2019, Arm announced a strategic partnership with Docker to provide uniform software development and deployment across a variety of environments. We continue to work together to inform and educate software developers on how to take advantage of the available features to improve the software development process.  

As the Arm architecture proliferates across computing, there are new ways to do software development. The traditional process of embedded software engineers cross-compiling C/C++ (and some assembly) on a Windows or Linux machine for an Arm target board is quickly changing. The cross-compile-and-copy workflow is being replaced by native compilation and Docker containers.

At Arm TechCon 2019, Arm and Docker held a workshop with hands-on content for attendees to gain experience using Docker for Linux C/C++ application development. One flow which generated significant interest was the ability to use Docker Desktop on Windows or Mac and build a C/C++ application in a Docker container on a remote Arm server. The build looks like a native compile and results in shorter build time compared to instruction translation on Docker Desktop. As Arm servers and cloud instances continue to increase performance and availability, they provide the best of both worlds for C/C++ compilation - native compilation and high performance. Docker makes the remote build almost transparent. TechCon workshop attendees couldn’t easily tell the difference between compiling on their laptop versus an AWS A1 instance. 

Let’s look at Docker buildx in more detail and see how to use it for building multi-architecture Docker images on remote Arm machines. 

If you are new to buildx for Arm, start with the articles Getting started with Docker on Arm and Getting started with Docker for Arm using buildx on Linux. This article assumes you understand the basics of buildx and are looking to create Docker images using Arm machines on a local network or in the cloud.

Local build review  

Docker buildx starts with a builder instance. The typical flow is to use "docker buildx create" to create a new builder instance and "docker buildx use" to set it as the default:

$ docker buildx create --name mybuilder
$ docker buildx use mybuilder 

A handy command to check the status of builder instances and show which platforms a builder instance supports is the “buildx ls” command. 

$ docker buildx ls 

NAME/NODE    DRIVER/ENDPOINT             STATUS  PLATFORMS 
mybuilder *  docker-container                     
  mybuilder0 unix:///var/run/docker.sock running linux/amd64, linux/arm64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6 
default      docker                               
  default    default                     running linux/amd64 

The builder instance above will use instruction translation to transparently build and run Arm images on an x86 machine running Windows, macOS, or Linux. This works great for many applications, but for larger C/C++ projects used in IoT or other embedded Linux systems, the compile time will be longer than cross-compiling. To get the best of both worlds, native compilation and high performance, let’s create a remote builder which is an Arm machine.  
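As a point of comparison before moving to remote builders, this is what a purely local multi-architecture build with the emulation-backed builder above looks like; every platform other than the host is built through instruction translation, which is where large C/C++ builds get slow. The image tag here is illustrative.

# Build for three platforms on the local x86 machine; the Arm platforms are emulated.
$ docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t jasonrandrews/c-hello-world --push .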

There are multiple ways to set up build instances on other machines. Today, I'm going to cover two of them:

  • Using an unencrypted TCP socket to a remote docker daemon 
  • Using an ssh connection to a remote docker daemon

The unencrypted TCP socket is suitable for use on a local network only. There is also an option for a secure, HTTPS-encrypted socket, which is not covered in this article.

The example c-hello-world project can be used to experiment with the instructions below.
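If you have the project checked out locally, a quick sanity check is to build and run it for the host architecture before moving on to remote builders; the image tag below is illustrative.

# Build for the host architecture and load the result into the local image store.
$ docker buildx build -t c-hello-world --load .

# Run the freshly built image.
$ docker run --rm c-hello-world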

Remote builds on a local network 

Docker uses a client-server architecture where a Docker client talks to the Docker daemon. The daemon builds the images and runs the containers. This allows the client (left side of the diagram below) to be on one machine, such as an x86 desktop, and the daemon (center of the diagram below) to be on another machine, such as an Arm server or cloud instance.

Figure 1: Docker client-server architecture
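The same split is visible in the plain Docker CLI: setting the DOCKER_HOST environment variable points the local client at a daemon on another machine. The address below is only an example (it matches the Raspberry Pi used later in this article).

# Point the local client at a remote daemon listening on TCP port 2375.
$ export DOCKER_HOST=tcp://192.168.0.165:2375

# This now reports information about the remote daemon, not the local one.
$ docker info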

When Docker is installed, the Docker daemon runs automatically, but the default connection is a Unix socket that can only be accessed from the same machine. Let's see how to enable an unencrypted TCP socket on a local Raspberry Pi.

Below is a simple script to enable the TCP socket. This can be run on a Raspberry Pi or an Ubuntu machine. The script sets up the Docker daemon to enable a TCP connection on port 2375. This is an unencrypted connection and only suitable for a local network connection. 

#!/bin/bash

# Enable remote access to the Docker daemon over an unencrypted TCP socket.
# Suitable for a trusted local network only.

sudo mkdir -p /etc/systemd/system/docker.service.d

# Write the systemd drop-in (overwriting any previous copy so the script
# can be re-run safely).
cat << 'EOF' | sudo tee /etc/systemd/system/docker.service.d/options.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375
EOF

# Reload the systemd configuration.
sudo systemctl daemon-reload

# Restart Docker so the new listener takes effect.
sudo systemctl restart docker

After restarting the daemon, move back to the x86 machine (Windows, macOS, or Linux) to create a remote builder instance that targets the Raspberry Pi. If the Pi is running the Raspbian operating system, only armv7 is supported. This builder connects to port 2375 on the Raspberry Pi at the given IP address to perform the remote build.

$ docker buildx create --use --platform linux/arm/v7 --name pi1 tcp://192.168.0.165:2375
$ docker buildx use pi1 
$ docker buildx build --platform linux/arm/v7 -t jasonrandrews/c-hello-world-pi --push . 
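Because the image was pushed to a registry, it can be pulled and run on the Raspberry Pi itself to confirm the result (using the image name tagged above).

# On the Raspberry Pi, pull and run the image that was built remotely.
$ docker run --rm jasonrandrews/c-hello-world-pi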

This works fine to build an image directly on the Raspberry Pi. Running a 64-bit Linux on the Pi would enable both Armv7 and Armv8 images to be built, but the performance is likely still slower than cross-compiling a large C/C++ project, even if the new Raspberry Pi 4 is used.  

Next, let's look at using ssh with an AWS A1 instance to build C/C++ projects. An A1 instance provides the option of a higher-performance machine with a higher CPU count. The a1.4xlarge instance has 16 CPUs, ideal for the parallel compilation of large C/C++ projects.

Remote builds using ssh 

An easier, and more secure, way to do remote builds is to use ssh to connect to a remote Docker daemon.

The “buildx create” command can also take a Docker context instead of a machine name or IP address. Passing an ssh-based context to “buildx create” produces a builder instance that connects over ssh.

Provide the username and IP address of an AWS A1 instance which is accessible via ssh. 

$ docker context create a1-context1 --docker host=ssh://ubuntu@52.14.231.112 
$ docker buildx create --use --platform linux/arm/v7,linux/arm64 --name aws-builder-ssh  a1-context1
$ docker buildx build --platform linux/arm64,linux/arm/v7 -t jasonrandrews/c-hello-world-a1 --push .
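The builder can be bootstrapped and inspected at any time to confirm that the ssh connection works and to list the platforms the remote daemon supports.

# Start BuildKit on the remote machine and show the builder details.
$ docker buildx inspect --bootstrap aws-builder-ssh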

Now the build goes to the remote machine over ssh, and no extra configuration is required on the remote machine to set up the Docker daemon for TCP access.

Setting up ssh access without a password is also recommended so that the remote builder instance does not prompt for a password on every build. There are multiple ways to do this; ssh-copy-id is the simplest, as sketched below.
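A minimal sketch, assuming an ed25519 key and the A1 instance address used above.

# Generate a key pair if one does not already exist (accept the defaults).
$ ssh-keygen -t ed25519

# Install the public key on the remote builder so ssh no longer prompts for a password.
$ ssh-copy-id ubuntu@52.14.231.112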

Creating a build farm 

Multiple machines can be combined into a "build farm" by appending additional contexts to a builder instance.

$ docker context create a1-context2 --docker host=ssh://ubuntu@52.14.231.114 
$ docker context create a1-context3 --docker host=ssh://ubuntu@52.14.231.121 

$ docker buildx create --use --name aws-farm a1-context1 
$ docker buildx create --append --name aws-farm a1-context2 
$ docker buildx create --append --name aws-farm a1-context3 
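Running "docker buildx ls" confirms that all three nodes are registered under the aws-farm builder, and a multi-platform build against it distributes the requested platforms across the nodes (the image tag below is illustrative).

# List the nodes that make up the aws-farm builder.
$ docker buildx ls

# Build both Arm variants using the farm.
$ docker buildx build --platform linux/arm64,linux/arm/v7 -t jasonrandrews/c-hello-world-farm --push .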

Results 

To understand the benefits for Arm C/C++ application development, let's compare build times on various machines. To do this, I compiled the Arm Compute Library as described in the machine learning example running AlexNet with the Arm Compute Library. Compiling the Arm Compute Library is computationally intensive and takes advantage of parallel compilation to shorten compile time on machines with more CPUs.

On each machine the number of CPUs was obtained using the command below, multiplied by two, and passed to scons with the -j option.

$ grep -c ^processor /proc/cpuinfo 
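For example, on the 16-CPU a1.4xlarge this works out to -j32. A sketch of the build invocation follows; the scons options are taken from the Arm Compute Library build instructions, and the exact flags can vary between library versions.

# Compute the job count as 2x the CPU count and build the Arm Compute Library.
$ NPROC=$(grep -c ^processor /proc/cpuinfo)
$ scons -j$((2 * NPROC)) Werror=0 debug=0 neon=1 opencl=0 os=linux arch=arm64-v8a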

The results here are not meant to be a comprehensive benchmark of machine performance, but a rough estimate of the differences between machine types and ways to compile. 

Computer                                             | Number of CPUs               | Compile time
x86 Linux laptop (cross-compile)                     | 4-core i7                    | about 13 min
Arm Acer R13 Chromebook                              | 4-core Cortex-A73/Cortex-A53 | about 30 min
Raspberry Pi 4                                       | 4-core Cortex-A72            | about 30 min
x86 Windows laptop (Docker, instruction translation) | 4-core i7                    | about 3 hours
AWS a1.4xlarge                                       | 16-core Graviton             | about 13 min

 

The results demonstrate the power of using buildx to create multi-architecture images on a remote Arm machine. The AWS A1 instance matches the x86 cross-compile time with the additional benefit of looking like a native compile to the build environment. Maybe someday I will have a laptop with 16 or 32 Armv8 CPUs, but even then an Arm server will likely still provide the fastest compile time. Using Docker buildx with a remote builder instance delivers performance on par with x86 cross-compilation and eliminates the cross-compile-and-copy step.

Docker Desktop on x86 is useful for many applications, but it's important to use the right solution for the problem. When C/C++ compilation is the bottleneck, an Arm server or cloud instance provides better performance and ease of use than cross-compiling, instruction translation, or compiling directly on an embedded board or a lower-performance laptop.

Wrap-up 

Docker significantly improves the software development environment for compiling and running C/C++ applications for Linux on Arm. Such applications are commonly found in IoT and embedded, and machine learning on Cortex-A is a perfect example. Previous methods of cross-compiling and copying files to a target board are likely to be replaced by containers and remote build servers and cloud instances.  

Appendix of helpful Docker commands

For those new to Docker, here is a summary of the most common commands. 

# See running containers: 
$ docker ps 

# See all containers, even exited ones: 
$ docker ps -a 

# See docker images: 
$ docker images 

# Remove a container: 
$ docker rm <CONTAINER ID> 

# Remove an image: 
$ docker rmi <IMAGE ID> 

# See the builders: 
$ docker buildx ls 

# Remove a builder (might hang so Ctrl-C and use above command to see if it’s gone) 
$ docker buildx rm aws-builder1 

# Remove all containers and images: 
$ docker system prune -a 

# Enable experimental features in the docker CLI on Linux by setting this environment variable (or add it to .bashrc):
$ export DOCKER_CLI_EXPERIMENTAL=enabled

# Enable experimental features on the docker daemon for Linux 
# Create the file /etc/docker/daemon.json with the contents: 
{
    "experimental": true
}
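After editing daemon.json on a systemd-based Linux distribution, restart the daemon so the setting takes effect.

# Restart Docker to pick up changes to /etc/docker/daemon.json
$ sudo systemctl restart docker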

Make sure to RSVP for the upcoming Docker Meetup on November 5, 2019.

