The Marvell MacchiatoBin is an inexpensive Arm-based networking and storage development platform. It contains a quad-core Cortex-A72 CPU, USB 3.0 ports, SATA 3.0 ports, and an x4 PCIe 3.0 slot. For network connectivity, it has 10GbE (SFP), 2.5GbE (SFP), and 1GbE (RJ45) interfaces. The board also supports U-Boot and UEFI (U-Boot is the default). Overall, we've found that this board is very useful for micro-service development work with cloud orchestrators like Kubernetes or Docker Swarm. This post will walk through how to set up the MacchiatoBin for working with these orchestrators.
Initial setup instructions can be found on the MacchiatoBin Wiki page. As of the writing of this post, the Ubuntu setup instructions use a custom kernel based on 4.4.x. Unfortunately, Docker (and thus Kubernetes) will not work on this custom kernel; when the command apt install docker.io is executed, the installation hangs. Despite this issue, the initial setup is good for getting familiar with how to configure and boot the board. In fact, we used this initial setup to configure and build kernel v4.16 for running Kubernetes and Swarm.
The instructions below will show how to compile kernel v4.16 for the MacchiatoBin and couple it with Ubuntu Bionic.
Clone the kernel source and list all of the available tags.
cd ~
git clone https://github.com/torvalds/linux.git
cd linux
git tag
Check out the desired tag (we built v4.16). Note that upstream support for the MacchiatoBin starts at kernel v4.11.
git checkout v4.16
Create a default config.
make defconfig
This will be the starting point for configuring the kernel.
We'll use menuconfig to enable the features needed to run Kubernetes and Swarm. If menuconfig fails to run, the error logs tend to give good hints on what's wrong. Typically the problem is that the ncurses package is missing. On Ubuntu, this can be installed with apt install libncurses-dev.
make menuconfig
When the "GUI" appears, use it to include the following as either a compile-in or a module.
Save the config and quit menuconfig.
With this configuration, Kubernetes and Swarm will work. However, some required configs may still be missing. To find them, there's a script in the docker-ce source that can be executed. Let's walk through that next.
Clone the docker-ce repo.
cd ~
git clone https://github.com/docker/docker-ce.git
Run the check-config.sh script.
./docker-ce/components/engine/contrib/check-config.sh
This script will return a non-comprehensive list of kernel configs required by Docker. Make a note of all the configs listed, open menuconfig, search for each config (press the forward slash key, /, to search), and enable any configs that aren't already enabled.
Build the kernel, device tree, and modules (-j4 is used below because the MacchiatoBin has 4 cores).
cd ~/linux
make -j4 Image dtbs modules
After the build completes, the kernel will be at ~/linux/arch/arm64/boot/Image, and the device tree blob will be at ~/linux/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dtb.
The instructions below are similar to what's on the MacchiatoBin Wiki. However, a few more steps are added since we're working with newer versions of the kernel and Ubuntu.
cd ~
mkdir ubuntu_18.04
cd ubuntu_18.04
wget http://cdimage.ubuntu.com/releases/bionic/release/ubuntu-18.04-server-arm64.iso
mkdir temp
sudo mount -o loop ubuntu-18.04-server-arm64.iso temp/
ls temp/install/
sudo unsquashfs -d rootfs/ temp/install/filesystem.squashfs
ls rootfs/
The Ubuntu Bionic RootFS will be present in ~/ubuntu_18.04/rootfs/.
Edit the file ~/ubuntu_18.04/rootfs/etc/passwd. Remove the 'x' in between 'root:' and ':0'. It should look like the below once the change is made.
mcbin@buildserver:~/ubuntu_18.04$ cat ./rootfs/etc/passwd
root::0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
[truncated output]
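If you prefer to script this change, a one-line equivalent is the following (a sketch; double-check the file afterwards). Removing the 'x' clears root's password field so root can log in on the console without a password.

# clear root's password field in the target RootFS
sudo sed -i 's/^root:x:/root::/' ~/ubuntu_18.04/rootfs/etc/passwd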
Ubuntu 18.04 uses Netplan for configuring network interfaces. Create the file ~/ubuntu_18.04/rootfs/etc/netplan/01-netcfg.yaml and place the following inside of it.
network:
  version: 2
  renderer: networkd
  ethernets:
    eth2:
      dhcp4: true
      nameservers:
        search: [XXXX]
        addresses: [YYY, ZZZ]
The nameservers block isn't always required. However, if the default search domain and addresses are known, enter them. Otherwise, remove this block.
Note, eth2 is the 1GbE (RJ45) interface.
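If this file is edited later, once the board is booted into Ubuntu the change can be applied without a reboot:

sudo netplan apply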
Copy the kernel image (Image) and device tree (armada-8040-mcbin.dtb) to ~/ubuntu_18.04/rootfs/boot/.
sudo cp ~/linux/arch/arm64/boot/Image ~/ubuntu_18.04/rootfs/boot
sudo cp ~/linux/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dtb ~/ubuntu_18.04/rootfs/boot
Install the modules.
cd ~/linux
sudo make modules_install INSTALL_MOD_PATH=/home/user_name/ubuntu_18.04/rootfs/
Tar the RootFS with the kernel, device tree, and modules installed.
cd ~/ubuntu_18.04
sudo tar -cjvf rootfs.tar.bz2 -C rootfs/ .
At this point, the RootFS image is ready for untarring onto a storage device for booting. Refer to the MacchiatoBin Wiki for booting off a storage device. Note that the instructions focus on SD cards and USB storage devices, but they're easy to adapt for booting off a SATA device. The U-Boot base command for SATA devices is scsi, as opposed to mmc and usb for SD cards and USB storage devices respectively.
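As an illustration, untarring onto a SATA disk might look like the following. This is only a sketch: it assumes the disk shows up as /dev/sda on the build machine with a single partition at /dev/sda1, and it destroys whatever is on that partition.

# format the first partition on the SATA disk (assumes /dev/sda1 already exists)
sudo mkfs.ext4 /dev/sda1
sudo mount /dev/sda1 /mnt
# unpack the RootFS (with kernel, device tree, and modules) onto the disk
sudo tar -xjvf ~/ubuntu_18.04/rootfs.tar.bz2 -C /mnt
sudo umount /mnt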
As of the writing of this post, the latest network driver has a bug with respect to the eth2 PHY. When Ubuntu boots, the Ethernet driver does not turn on the PHY, which results in no network connectivity. The workaround is to force the PHY on in U-Boot before the OS boots by adding the U-Boot dhcp command at the beginning of the U-Boot bootcmd variable. Instructions on how to configure the bootcmd variable are on the MacchiatoBin Wiki. Here's an example of what our bootcmd looks like for booting off a SATA disk.
bootcmd=dhcp; scsi scan; scsi dev 0; ext4load scsi 0:1 $kernel_addr $image_name;ext4load scsi 0:1 $fdt_addr $fdt_name;setenv bootargs $console root=/dev/sda1 rw rootwait; booti $kernel_addr - $fdt_addr
Note, this is not an issue for the Marvell custom 4.4.x kernel.
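For reference, a bootcmd like the one above can be set and persisted from the U-Boot prompt roughly as follows (a sketch; adjust the command to match your storage device and file names):

setenv bootcmd 'dhcp; scsi scan; scsi dev 0; ext4load scsi 0:1 $kernel_addr $image_name; ext4load scsi 0:1 $fdt_addr $fdt_name; setenv bootargs $console root=/dev/sda1 rw rootwait; booti $kernel_addr - $fdt_addr'
saveenv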
Now that the upstream kernel is booting with Ubuntu, try a few tests to verify that Docker is functioning properly.
Launch a standalone container and run apt update to check network connectivity through the Docker bridge.
marvell@macchiato-0:~$ docker run -ti ubuntu:18.04 bash
root@80e90b5d0a05:/# apt update
Get:1 http://ports.ubuntu.com/ubuntu-ports bionic InRelease [242 kB]
Get:2 http://ports.ubuntu.com/ubuntu-ports bionic-updates InRelease [88.7 kB]
Get:3 http://ports.ubuntu.com/ubuntu-ports bionic-backports InRelease [74.6 kB]
...
[Output Truncated]
The above confirms that basic features like cgroups, namespaces, and bridges are enabled in the kernel.
Test volumes. Note, volume mounts are located at /var/lib/docker/volumes/ on the host.
marvell@macchiato-0:~$ docker run --mount type=volume,source=myvol2,target=/test -ti ubuntu:18.04 bash
root@5f9b3b734e81:/# echo "Hello Volume" > /test/volume_test
root@5f9b3b734e81:/# exit
marvell@macchiato-0:~$ sudo cat /var/lib/docker/volumes/myvol2/_data/volume_test
Hello Volume
Test bind mounts.
marvell@macchiato-0:~/kernel_test$ docker run --mount type=bind,source="$(pwd)",target=/test -ti ubuntu:18.04 bash
root@5b122fac08aa:/# echo "Hello Bind" > /test/hello_bind
root@5b122fac08aa:/# exit
exit
marvell@macchiato-0:~/kernel_test$ cat hello_bind
Hello Bind
Test tmpfs mounts. Note, tmpfs mounts do not persist after the container is destroyed. Thus, in the example below, the data is accessed from within the container.
marvell@macchiato-0:~/kernel_test$ docker run --mount type=tmpfs,destination=/test -ti ubuntu:18.04 bash
root@76eb0c94125a:/# echo "Hello Volume" > /test/volume_test
root@76eb0c94125a:/# cat /test/volume_test
Hello Volume
Bridge networking was already tested above, but there are other networking configurations that should be tested as well, in particular macvlan, ipvlan, host, and overlay networks. In the examples below, we only test the macvlan and overlay drivers. These two examples should give a good picture of how to test the different networking drivers in Docker.
The macvlan driver allows a container's interface to appear as though it were directly connected to the physical network. This bypasses bridges and NAT to improve network performance. When a macvlan network is created, a subnet and gateway have to be supplied, and these should match the subnet and gateway of the physical network. Later, when a container is attached to the macvlan network, an IP address from this subnet gets assigned to the container. The issue is that this IP address assignment doesn't happen with a DHCP request on the physical network. Instead, Docker assigns an IP address of its own choosing (or one supplied by the user) to the container. This means the IP address assigned to the container could already be assigned to another device on the network via DHCP. In practice, a subset of IP addresses on the physical network needs to be reserved so that the DHCP server can't assign them. The macvlan network can then be set up to assign addresses from this pool of reserved addresses in order to avoid conflicts with the DHCP server.
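Before getting to the test, here's a sketch of that reservation approach. The addresses below are placeholders, and the reserved range must match whatever block your DHCP server is configured to exclude; Docker's --ip-range option then confines the addresses it hands out to that block.

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  --ip-range=192.168.1.192/27 \
  -o parent=eth2 \
  my-reserved-macvlan-net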
Nonetheless, below is a short test for the macvlan driver. We attach two containers to the macvlan network, and then have one of the containers ping the other.
marvell@macchiato-0:~$ docker network create -d macvlan \
> --subnet=10.118.32.33/22 \
> --gateway=10.118.32.1 \
> -o parent=eth2 \
> my-macvlan-net
d1573aed42e7b60578719089088882831c0d02e105a9b029581b55492bbabef0
marvell@macchiato-0:~$ docker run --network my-macvlan-net --name machine1 --ip 10.118.32.250 -dti alpine ash
9e44ee428f60a25f64b5503e2c566b6d257712dbd3c4e6bfe2e37853b74e7f65
marvell@macchiato-0:~$ docker run --network my-macvlan-net --name machine2 --ip 10.118.32.251 -ti alpine ash
/ # ping machine1
PING machine1 (10.118.32.250): 56 data bytes
64 bytes from 10.118.32.250: seq=0 ttl=64 time=0.141 ms
64 bytes from 10.118.32.250: seq=1 ttl=64 time=0.113 ms
64 bytes from 10.118.32.250: seq=2 ttl=64 time=0.110 ms
^C
--- machine1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.110/0.121/0.141 ms
For reference, overlay networks are discussed in detail in a previous post called Understanding Overlay Networks In Cloud Deployments.
Testing the overlay driver requires the initialization of a swarm. After the Swarm is initialized, an overlay network can be created.
marvell@macchiato-0:~$ docker swarm init
Swarm initialized: current node (pvrv8x6z33b4aw2r0zodnpu9k) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token SWMTKN-1-2uc6ovsbsron14zt0soleczl4jujbeai0zjmf97tzch1at4g5w-5j94r3014hanwjbo7q8oo9x0h 10.118.32.33:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
marvell@macchiato-0:~$ docker network create --driver overlay my-overlay
x2b5hhklyx0nckmxskts3xfvf
More nodes can be added to the Swarm with the docker swarm join command. However, for initial testing, it's best to test on a single node.
Now that the swarm is initialized and we have an overlay network, let's attach containers and have one of them ping the other.
marvell@macchiato-0:~$ docker service create --network my-overlay --name service1 alpine sleep 10000
uqpg1k96knr3encl5jpyu9a1p
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
marvell@macchiato-0:~$ docker service create --network my-overlay --name service2 alpine sleep 10000
98oqzv9sirjo9c8ey44jkq2wz
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
marvell@macchiato-0:~$ docker ps
CONTAINER ID    IMAGE           COMMAND         CREATED          STATUS          PORTS    NAMES
8835b45cc07e    alpine:latest   "sleep 10000"   29 seconds ago   Up 29 seconds            service2.1.93szlc13vbispnina94nkfy92
314b95f8aebb    alpine:latest   "sleep 10000"   40 seconds ago   Up 39 seconds            service1.1.zjg60d7m53mw3tej46rjo1nmk
marvell@macchiato-0:~$ docker exec -ti 314b95f8aebb ash
/ # ping service2
PING service2 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: seq=0 ttl=64 time=0.097 ms
64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.127 ms
64 bytes from 10.0.0.5: seq=2 ttl=64 time=0.121 ms
^C
--- service2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
Now that our basic Swarm is confirmed to be working, our testing can be extended by adding more nodes to the Swarm and by enabling overlay encryption (add the --opt encrypted flag to the docker network create command), but we'll stop here.
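For reference, creating an encrypted overlay network would look something like this (a sketch; the encryption is carried over IPsec and adds some overhead on the data path):

docker network create --driver overlay --opt encrypted my-encrypted-overlay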
Given that the above tests were successful, we now have confidence that our kernel can support Docker Swarm.
In our experience, if Docker Swarm is functional, then Kubernetes will be functional too. This is because the various networking solutions that can be deployed with Kubernetes will rely on the same kernel features that Swarm relies on. We won't cover it in this post, but we have successfully deployed containers with Kubernetes on the MacchiatoBin. In fact, we've deployed the micro-services demo discussed in Cloud Management Tools On Arm on the MacchiatoBin with Kubernetes.
We've shown how to set up a MacchiatoBin development board with an upstream kernel that is configured to run Kubernetes and Swarm. This transforms the MacchiatoBin from a networking and storage development platform into a cloud computing development platform. It's inexpensive and great for getting involved in the Works on Arm project, which aims to further develop the Arm software ecosystem in the data center. The idea of the Works on Arm project is to take open source projects, deploy them on Arm platforms, and debug any functional and performance issues found in the process. All it takes to get started is to build and run the software as intended by its developers. We'd like to encourage readers to get involved with this project.
[CTAToken URL = "https://www.worksonarm.com/" target="_blank" text="Works on Arm Project" class ="green"]
Update: Some people have reported not being able to boot using the upstream kernel source. An alternative code base that appears to work well is the Marvell Linux kernel source located here:
https://github.com/MarvellEmbeddedProcessors/linux-marvell
As of this post, I see kernels based on 4.14, which are not the absolute latest but still very up to date.
The DTB is different between kernels, but that was not the problem here. When the (bigger) kernel got loaded, it would overwrite the DTB. Instead of using 0x4f00000 as the memory address, I loaded the DTB at a different address in RAM, and then it worked just fine.
The device tree blob should be the same regardless of which kernel you're using. I would imagine you can use the same device tree you used with your older kernel on the new kernel.
Thanks for this step-by-step howto. I'm trying to get a newly built Linux kernel (4.19) to boot. However, u-boot prints the following:
ERROR: Did not find a cmdline Flattened Device Tree
Could not find a valid device tree

The u-boot env regarding fdt:
fdt_addr=0x1000000
fdt_addr_r=0x4f00000
fdt_high=0xffffffffffffffff
fdt_name=boot/armada-8040-mcbin.dtb
fdtcontroladdr=7f70ac38
The dtb that works:
armada-8040-mcbin.dtb: Device Tree Blob version 17, size=36904, boot CPU=0, string block size=1816, DT structure block size=35032
The dtb that doesn't work:
armada-8040-mcbin.dtb: Device Tree Blob version 17, size=27203, boot CPU=0, string block size=1087, DT structure block size=26060
Do I need to adjust the fdt_* and fdtcontroladdr environment settings? If so, how would I obtain the correct values?
The rootfs works fine with the old kernel though.