SMARTER: A smarter-cni for Kubernetes on the Edge

Chris Adeniyi-Jones
April 9, 2020
10 minute read time.

The decreasing cost and power consumption of intelligent, interconnected, and interactive devices at the edge of the internet are creating massive opportunities to instrument our cities, factories, farms, and environment to improve efficiency, safety, and productivity. Developing, debugging, deploying, and securing software for the estimated trillion connected devices presents substantial challenges. As part of the SMARTER (Secure Municipal, Agricultural, Rural, and Telco Edge Research) project, Arm has been exploring the use of cloud-native technology and methodologies in edge environments to evaluate their effectiveness at addressing these problems at scale. 

The Container Network Interface (CNI) defines a common interface between container runtimes and network plug-ins. The CNI is used to manage the allocation and deallocation of network resources to containers as they are created and deleted. 
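
As an illustration (not from the original post): a CNI plug-in is typically selected by a JSON configuration file placed in the /etc/cni/net.d directory, and the container runtime invokes the plug-in binary named in the "type" field with operations such as ADD and DEL. A minimal sketch of such a configuration file, with invented network and plug-in names:

cat <<EOF > /etc/cni/net.d/10-examplenet.conf
{
    "cniVersion": "0.3.1",
    "name": "examplenet",
    "type": "example-plugin"
}
EOF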

Container orchestration frameworks, such as Kubernetes, can use a CNI to deploy containers into a Kubernetes cluster, while remaining independent of the implementation details and topology of the network used between the machines in the cluster. 

[Figure: A diagram detailing standard Kubernetes]

Many CNI plug-ins are available, with a large variation in the sophistication of the functionality they provide. In the most common setup of Kubernetes clusters, it is desirable that every node is reachable from every other node in the cluster. This enables the seamless deployment of applications and services across the nodes within the cluster. The CNI plug-in is responsible for ensuring that the containers it creates are reachable from every node, and it can use a range of technologies to achieve this, for example building an overlay network using VXLAN.
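
As a rough illustration of what an overlay-based plug-in does under the hood (this is not part of smarter-cni; the device name, VXLAN ID, and addresses are invented for the example), a VXLAN overlay can be built manually with iproute2:

ip link add vxlan0 type vxlan id 42 dev eth0 dstport 4789   # overlay device tunnelling over eth0
ip addr add 172.30.1.1/24 dev vxlan0                        # this node's overlay subnet
ip link set vxlan0 up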

[Figure: A comparison between IoT Edge and IoT with Edge Compute]

In our use-case, we have chosen a different network organization from the Kubernetes norm, as we are looking to enable Edge Compute for IoT. In the 'IoT Edge' use-case, we have various IoT endpoints connected to an Edge Gateway. The IoT endpoints do not have their own IP connection; instead they communicate with the Edge Gateway. In the simplest case, the Edge Gateway acts as a relay, passing on data from the endpoints (for example, data from a temperature sensor) to a cloud-based application and passing back commands from that cloud application. In this setup, many endpoints may be connected to a single Edge Gateway. This combination of endpoints plus gateway is then the unit of deployment that can be instantiated in many different locations.

In the ‘IoT with Edge Compute’ use-case, we take advantage of the compute available in the Edge Gateway to run applications there that can provide a lower-latency response to data from the IoT endpoints. This can also have benefits in terms of privacy and data-security as we also can reduce the amount of raw data being sent to the cloud.

We chose to use Kubernetes to manage the deployment of applications to the Edge Gateways in our system. Each of our deployed Edge Gateways becomes a node in our Kubernetes cluster, but unlike a normal cluster we have no requirement for each node to be reachable from every other node.

In this typical IoT edge computing implementation, the system is segmented with the control plane (master) running in the cloud, while the Edge Gateways (the worker nodes) themselves are scattered geographically and are probably located behind a firewall or behind a NAT in a private network.

In this model, connectivity between nodes, and between a node and the cloud, is limited. We assume that nodes have outbound internet connectivity and can initiate a connection to the hosted Kubernetes master in the cloud. Furthermore, the nodes themselves are not connected to other nodes: an application running on an Edge Gateway does not communicate directly with applications running on other Edge Gateways.

[Figure: Cloud-based k8s Master and Cloud-based Application]

We took these restrictions into account when designing smarter-cni for our IoT with Edge Compute scenario. When using smarter-cni, only pods (containers) running on the same node can communicate directly with each other.

Our starting point for smarter-cni was the "to_docker" example CNI plug-in that is available in the Kubernetes source repository. That plug-in is no longer maintained, so we have created the smarter-cni plug-in.

smarter-cni details

Networking

The networking configuration for a node (Edge Gateway) using smarter-cni can be viewed in two ways:

  • External view: the physical network interfaces (ethernet, wifi, cellular, and so on) on the node are managed by the networks that each interface is connected to. The system makes no assumptions about the IP addresses or DNS names provided for the node. It is expected that at least one interface provides access to the internet so that the node can connect to the cloud-based Kubernetes master. We assume that the external interfaces of the node are configured externally, by DHCP, BOOTP, and so on.
  • Internal view: smarter-cni uses a docker user-defined network to which all the Kubernetes pods are connected via virtual interfaces (only pods that use host networking do not have a virtual interface). Each deployed pod has an interface allocated from this user-defined network and receives an address from within the network's range, as sketched below.
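
As a sketch of what this looks like at the Docker level (the network name and subnet here are invented for illustration; smarter-cni's install script creates its own, though the subnet chosen below mirrors the pod addresses shown later in this post):

docker network create --driver bridge --subnet 172.38.0.0/24 smarter-example   # one user-defined network per node
docker network inspect smarter-example                                         # shows the subnet and attached containers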

[Figure: smarter-cni node networking diagram]

DNS

In a standard Kubernetes cluster, DNS is centralized, and Kubernetes Service objects are used to provide a mapping between IP addresses and pods. This mechanism also supports load-balancing and proxying. Smarter-cni provides a simpler implementation that is cheaper and more distributed; it also removes the need for Service objects by providing a DNS entry for each pod.

Docker provides an automatically enabled, embedded DNS resolver (127.0.0.11) for user-defined networks. When a Kubernetes pod is started on a node, smarter-cni captures the pod's name and creates a DNS record in the embedded DNS server. It is this mechanism that enables pods running on the same node to discover each other's IP addresses via a DNS lookup of their names. Each node also runs a containerized dnsmasq connected to the user-defined network with a static address. Pods using host networking are configured to perform DNS lookups via this dnsmasq, and can therefore also discover IP addresses via DNS lookups of pod names (which would not otherwise be possible, as host-networked pods cannot access the embedded DNS resolver directly).
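
To make the mechanism concrete, here is an illustrative check (the network name matches the sketch above, and "example" stands in for a running pod's name; both are assumptions): a container attached to the user-defined network resolves pod names through Docker's embedded resolver at 127.0.0.11:

docker run --rm --network smarter-example alpine nslookup example   # resolves the pod name via Docker's embedded DNS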

Installation 

To install smarter-cni on a node, check out the latest tagged version (currently v0.5.1) from the smarter-cni repository. 

In the smarter-cni repository, you will find:

  • Shell-scripts that implement the CNI plug-in
  • install.sh - a shell script that:
    • Copies the CNI plug-in components into the correct places
    • Instantiates a docker user-defined network and starts the dnsmasq container.
  • A README.md file that describes the installation process in more detail.
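
Putting these pieces together, installation reduces to a few commands. A minimal sketch, assuming the tag mentioned above; the clone URL is a placeholder for the smarter-cni repository:

git clone <smarter-cni-repository-url> smarter-cni   # substitute the smarter-cni repository location
cd smarter-cni
git checkout v0.5.1   # the latest tagged version at the time of writing
./install.sh          # copies the plug-in into place, creates the docker network, starts dnsmasq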

Using smarter-cni with k3s

Once smarter-cni is installed on a node, it can be used as the CNI when the node joins a Kubernetes cluster. In our 'IoT with Edge Compute' setup we do not run the Kubernetes kube-proxy or CoreDNS pods or services: they provide cross-node (that is, cross-Edge-Gateway) functionality that we explicitly do not support.

Here is an example of using smarter-cni with k3s, with docker as the container runtime engine (we assume that docker is already present). We run the k3s server on one node (the master) and the k3s agent on another node (the worker).

On the Master Node: 

Install smarter-cni as specified above.

Download the latest k3s binary. Both 64-bit and 32-bit Arm platforms are supported, as well as x86. Install k3s as `/usr/local/bin/k3s`.
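
For example, a sketch using the GitHub release assets (the URL and asset names follow the k3s releases convention and should be verified against the current release page):

# asset name per platform: k3s (x86_64), k3s-arm64 (64-bit Arm), k3s-armhf (32-bit Arm)
curl -Lo /usr/local/bin/k3s https://github.com/rancher/k3s/releases/latest/download/k3s-armhf
chmod +x /usr/local/bin/k3s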

Start the k3s server on the master node using:

/usr/local/bin/k3s server --docker --no-flannel --no-deploy coredns --no-deploy traefik --disable-agent > server.log 2>&1 &

This starts the k3s server using docker as the container runtime engine and switches the CNI from the default (flannel) to the one specified in the /etc/cni/net.d directory. The command also prevents coredns and traefik from being deployed, as we do not use that functionality. It generates logging output, so it is best to redirect standard output and standard error to a file, as shown.

Note that in this setup the master node is not running the k3s agent and will therefore not run any applications that are deployed into the cluster.

Find the token that a worker will need to join the cluster. This is located at /var/lib/rancher/k3s/server/node-token on the master node.

For example:

cat /var/lib/rancher/k3s/server/node-token

K1093b183760bf9caa3d3862975cfdc5452a84fe258ee672d545dd2d27900045162::node:a6208aefd1e9bf2644b0c7eb10a76756

On the Worker Node:

Install smarter-cni as specified above.

Download the latest k3s binary, as on the master node, and install it as `/usr/local/bin/k3s`.

Put the token from the master into an environment variable on the worker node:

export TOKEN="K1093b183760bf9caa3d3862975cfdc5452a84fe258ee672d545dd2d27900045162::node:a6208aefd1e9bf2644b0c7eb10a76756"
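
The agent command below also expects the master's address in an environment variable; set it first (the address here is an invented example):

export IP_ADDRESS_OF_MASTER_NODE=10.2.14.1   # replace with the IP address of your master node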

Run the k3s agent on the worker node, filling in the IP address of the master node and providing the token:

k3s agent --docker --no-flannel --server=https://${IP_ADDRESS_OF_MASTER_NODE}:6443 --token ${TOKEN} > worker.log 2>&1 &

This starts the k3s agent and joins the worker to the cluster. Again, it is best to redirect standard output and standard error to a file, as shown.

On the Master Node:

Now on the master node you should be able to see the state of the cluster using:

/usr/local/bin/k3s kubectl get nodes -o wide

which should produce output like:

NAME    STATUS   ROLES    AGE    VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
pike2   Ready    <none>   4d1h   v1.16.2-k3s.1   10.2.14.69    <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7+      docker://19.3.5

The "Ready" status shows that the worker node has joined the cluster correctly.

The same k3s agent command can be run on other nodes (on which smarter-cni and k3s have been installed) to add more nodes to the cluster. For example, with a second worker joined:

NAME    STATUS   ROLES    AGE   VERSION         INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
pike2   Ready    <none>   47h   v1.16.3-k3s.2   10.2.14.69    <none>        Raspbian GNU/Linux 10 (buster)   4.19.75-v7+      docker://19.3.5
pike1   Ready    <none>   47h   v1.16.3-k3s.2   10.2.14.53    <none>        Raspbian GNU/Linux 10 (buster)   4.19.50-v7+      docker://18.9.0

Running an Application

Here is a YAML description of an example application that can be deployed to the cluster. It is defined as a Kubernetes DaemonSet, so it will be deployed on each node in the cluster:

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: example
  labels:
    k3s-app: example
spec:
  selector:
    matchLabels:
      name: example
  template:
    metadata:
      labels:
        name: example
    spec:
      hostname: example
      containers:
      - name: example-dummy-pod
        image: alpine
        command: ["/bin/ash", "-ec", "while :; do date; sleep 5 ; done"]

This application consists of a shell command running in an Alpine Linux image. It prints the current date and time to standard output every five seconds.

To deploy the application, put the YAML description into a file and then apply it to the cluster (on the master node):

k3s kubectl apply -f example.yaml
daemonset.apps/example created

The nodes may need to pull the Alpine docker image, after which the application should start running:

k3s kubectl get daemonsets,pods -o wide
NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS          IMAGES   SELECTOR
daemonset.apps/example   2         2         2       2            2           <none>          85s   example-dummy-pod   alpine   name=example

NAME                READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod/example-ksd9z   1/1     Running   0          85s   172.38.0.3   pike2   <none>           <none>
pod/example-f6mvv   1/1     Running   0          85s   172.38.0.3   pike1   <none>           <none>

You can use the k3s command to view the output from the running application. For example, to look at the output of a particular pod:

k3s kubectl logs pod/example-ksd9z
Fri Dec 6 15:56:39 UTC 2019
Fri Dec 6 15:56:44 UTC 2019
Fri Dec 6 15:56:49 UTC 2019

The application can be removed from all the nodes with a single command:

k3s kubectl delete daemonset.apps/example
daemonset.apps "example" deleted

Conclusion 

Smarter-cni is designed for the particular use-case of Edge Compute for IoT. Using smarter-cni as the CNI for a Kubernetes cluster reduces the complexity of the networking setup while still supporting many important Kubernetes properties. This is a common theme in the SMARTER project: applying cloud-native technologies and adapting them where the constraints require it. Networking is just one part of the story for making Edge Compute for IoT a reality. Look out for upcoming blogs about different aspects of the SMARTER project for more information, and should you have any questions, contact Chris, Principal Research Engineer at Arm Research.

Contact Chris Adeniyi-Jones 

This post is the first in a five-part series. Read the other parts of the series using the links below:

Part two: SMARTER: An Approach to Edge Compute Observability and Performance Monitoring

Part three: SMARTER: A Smarter-Device-Manager for Kubernetes on the Edge

Part four: SMARTER: Debugging a Remote Edge Device
