Containers Fundamental to Distributed Cloud Services

As operators implement their next generation networks, containers can help accelerate application deployment cycles and increase network agility, enabling the same microservices that ran in the datacenter to run at the network edge. These containers are lightweight, stand-alone runtime environments that eliminate inconsistencies by wrapping software up in a minimal filesystem, which contains only what it needs to run: code, runtime, system tools and system libraries.

In my last blog I talked about how networks were adopting SDN and NFV technologies to reduce costs and speed up deployment. NFV decouples network functions from the hosting platform and simply replaces dedicated hardware with the same application running as a virtual machine or container on a general-purpose server. This blog will focus on containers and the benefits they bring to NFV systems.

Minimal latency and high performance vs. isolation and security in NFV systems

Containers provide several advantages over traditional virtual machines (VMs). First, isolation is done at the kernel level without the need for a guest operating system or hypervisor, so containers are lightweight. Typically, each container provides a single service.


Figure 1: VMs and Containers

For example, in the case of an NFV-based vCPE platform, many of the services (e.g. routing, firewall security, and virtual private network connectivity) can be split into small parts (microservices that each do one thing) and chained together.

Deploying microservices in containers means it’s easy to add, reduce, or change the mix and ratio of services running from customer to customer. It also makes applications faster to deploy and easier to develop and update, because individual microservices can be designed, tested, deployed or updated quickly without affecting all the other services.
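To make the service-chaining idea concrete, here is a minimal sketch in Go (the language behind the main container runtimes). The `Packet`, `Service`, `firewall`, and `vpn` names are our own illustrative inventions, not part of any real vCPE product; the point is only that small single-purpose functions can be composed into a chain, just as single-service containers are chained in an NFV service graph.

```go
package main

import (
	"fmt"
	"strings"
)

// Packet is a stand-in for traffic flowing through a vCPE service chain.
type Packet struct {
	Src, Dst string
	Payload  string
}

// Service is one single-purpose network function (a "microservice").
type Service func(Packet) (Packet, error)

// firewall drops packets from a blocked source range.
func firewall(p Packet) (Packet, error) {
	if strings.HasPrefix(p.Src, "10.0.0.") {
		return p, fmt.Errorf("blocked source %s", p.Src)
	}
	return p, nil
}

// vpn tags the payload as if it were encapsulated in a tunnel.
func vpn(p Packet) (Packet, error) {
	p.Payload = "vpn(" + p.Payload + ")"
	return p, nil
}

// chain composes services so they run one after another, mirroring
// how single-service containers are chained in an NFV service graph.
// Swapping the mix of services per customer is just a different
// argument list here.
func chain(services ...Service) Service {
	return func(p Packet) (Packet, error) {
		var err error
		for _, s := range services {
			if p, err = s(p); err != nil {
				return p, err
			}
		}
		return p, nil
	}
}

func main() {
	pipeline := chain(firewall, vpn)

	out, err := pipeline(Packet{Src: "192.168.1.5", Dst: "8.8.8.8", Payload: "data"})
	fmt.Println(out.Payload, err) // vpn(data) <nil>

	_, err = pipeline(Packet{Src: "10.0.0.9", Dst: "8.8.8.8", Payload: "data"})
	fmt.Println(err != nil) // true: blocked by the firewall stage
}
```

Because each stage is independent, one stage can be updated or replaced without touching the others, which is the deployment property the paragraph above describes.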

However, one of the disadvantages of containers compared to traditional VMs is security. VMs only share the hypervisor, which has less functionality and is less prone to attacks than the shared kernels in containers.

To address this, several companies are developing new container security models today. One initiative is Kata Containers, an open source project and community working to build a standard implementation of lightweight VMs that feel and perform like containers, but provide the workload isolation and security advantages of VMs.

Different types of container runtimes

Although Docker is the most popular container technology today, there are many other container solutions out there.

Table 1 lists some of the top container runtimes:

| | Docker | CoreOS' rkt (now owned by Red Hat) | LXD | Kata |
|---|---|---|---|---|
| Virtualisation technology | OS level | OS level, hypervisor | OS level | OS level / hypervisor |
| Container format / implementation | OCI (CNCF) | OCI (CNCF) | OCI (Canonical) | OCI (OpenStack) |
| Supported platforms | Linux, Windows, macOS, Microsoft Azure, AWS | Linux, Windows, macOS | Linux | Linux |
| Programming language | Go | Go | Go | Go |

Table 1: Container Types

Several industry initiatives are standardising container-related technology to avoid vendor lock-in and enable developers to pick best of breed for their platform. For example, the Open Container Initiative (OCI), run by the Linux Foundation, creates open industry standards around container formats and runtimes, and most containers comply with these standards today.

The Cloud Native Computing Foundation (CNCF), also part of the Linux Foundation, is home to many of the key container-related projects, including Kubernetes and Prometheus. Its goal is to enable software developers to build better container platforms faster.

Go (often referred to as Golang), a programming language created at Google, underpins all the main container architectures. For the developer, Go makes it easy to package pieces of code functionality, and then build applications by assembling these packages. The packages can then be easily reused for other applications as well.

Containers and orchestration

Orchestration tools manage how multiple containers are created, upgraded, and made available. Orchestration also controls how containers are connected to build sophisticated applications from multiple, microservice containers.

Kubernetes, Mesos/Marathon, and Docker Swarm are some of the more popular options for deploying and managing containers.

  • Kubernetes is based on Google’s experience of running large scale container workloads for many years.
  • Swarm is Docker’s own container orchestration tool. It uses the standard Docker API and networking, making it easy to drop into an environment where you’re already working with Docker containers.
  • Mesos predates Docker; Marathon is a container orchestration framework that runs on top of Mesos.

Docker’s Swarm gives you the easiest route into orchestrating a cluster of Docker hosts. Kubernetes focuses less on the containers themselves and more on deploying and managing services. While Kubernetes provides more flexibility, it comes at the expense of simplicity. Finally, Mesos (with Marathon) promises large scale but introduces further complexity.
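At their core, these orchestrators run a reconciliation loop: compare the desired state of each service with what is actually running, then act to close the gap. The drastically simplified sketch below captures that idea; the `reconcile` function and its map-based state are illustrative inventions, not the Kubernetes or Swarm API.

```go
package main

import "fmt"

// reconcile compares the desired replica count for each service with
// what is actually running and returns the scaling actions an
// orchestrator would take. Real orchestrators run a loop like this
// continuously, restarting failed containers and scaling services.
func reconcile(desired, actual map[string]int) []string {
	var actions []string
	for svc, want := range desired {
		have := actual[svc]
		switch {
		case have < want:
			actions = append(actions, fmt.Sprintf("start %d %s", want-have, svc))
		case have > want:
			actions = append(actions, fmt.Sprintf("stop %d %s", have-want, svc))
		}
	}
	return actions
}

func main() {
	desired := map[string]int{"firewall": 3}
	actual := map[string]int{"firewall": 1} // two replicas have died
	fmt.Println(reconcile(desired, actual)) // [start 2 firewall]
}
```

Everything else an orchestrator adds (scheduling, networking, rolling upgrades) is layered on top of this declare-and-converge model.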

Ecosystem


Figure 2: Arm works with key open source projects across the container landscape

Within the Linux Foundation, the Open Platform for NFV (OPNFV) facilitates the development and evolution of NFV components across various open source ecosystems. It hosts a range of projects, such as container4nfv, that cover both the container (e.g. Docker) and orchestration (e.g. Kubernetes (k8s), OpenStack plus k8s, or other variants) technologies needed to build NFV container platforms.

Major OEMs, operators, and hyperscalers will normally take an à la carte approach to building their platforms, for example using Mesos for clustering/scheduling and Docker as the container engine. Smaller OEMs, however, will normally pick a complete system such as Docker.

Arm hardware technology enabling containers

Many Arm-based SoCs contain a large number of efficient Cortex-A cores, which makes them an ideal candidate for deploying container technology. Containers can be distributed across this hardware and spun up or shut down with ease.

Within the SoC, Arm’s NEON technology provides an advanced SIMD (single instruction, multiple data) architecture extension, which can be used, for instance, to accelerate Go (the language underlying the main container runtimes) as well as to speed up validating microservice image signatures. Image signatures provide a means of checking where a container image came from and that the image has not been tampered with.

NEON does this by loading multiple pieces of data and performing an operation across all of them at once. NEON can be used in multiple ways, including NEON-enabled libraries, a compiler's auto-vectorization feature, NEON intrinsics, and NEON assembly code.
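Signature checking ultimately reduces to computing a cryptographic digest over the image bytes and comparing it with the value recorded in a signed manifest -- exactly the kind of bulk byte-crunching that SIMD- and hardware-accelerated crypto libraries speed up. A minimal sketch using Go's standard `crypto/sha256` package (the layer data and the `verify` helper here are illustrative, not a real registry client):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// digest returns the hex-encoded SHA-256 of an image layer's bytes.
// On AArch64, Go's crypto packages can take advantage of hardware
// acceleration for exactly this kind of bulk hashing.
func digest(layer []byte) string {
	sum := sha256.Sum256(layer)
	return hex.EncodeToString(sum[:])
}

// verify checks a layer against the digest recorded in a signed manifest.
func verify(layer []byte, want string) bool {
	return digest(layer) == want
}

func main() {
	layer := []byte("pretend this is a container image layer")
	want := digest(layer) // in reality this comes from the signed manifest

	fmt.Println(verify(layer, want))              // true: untouched layer
	fmt.Println(verify(append(layer, '!'), want)) // false: tampered layer
}
```

Any change to the layer bytes changes the digest, so the comparison detects tampering; the signature over the manifest (not shown) establishes where the image came from.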

Containers can also sit on top of VMs. In the infrastructure space, containers can run in virtualised environments to capitalise on their high availability, data persistence, and security features.

Kata Containers, whose hypervisor is tuned for the lightweight nature of containers, can benefit from Arm’s CPU hardware extensions, which accelerate Virtual Machine Manager (VMM) switching between VMs and hypervisor software.

Arm has also made continuous improvements to its System Memory Management Unit (MMU-600) and Generic Interrupt Controller (GIC-600) architecture to ensure optimal performance within a virtualised environment.

Arm software optimisations

Arm works with various container/microservice runtime implementations on AArch64, including Kata Containers and unikernel projects. We also contribute to other container- and Docker-related open source projects such as LinuxKit, Moby, Kubernetes, and container4nfv. These contributions provide developers with the correct Arm build image for their given platform.

Arm also collaborates with other platforms (e.g. IBM) to guarantee interoperability across different architectures and to ensure partners have access to the latest tools and hardware. This approach:

  • Reduces operational cost and complexity, as networks will continue to have mixed environments for the foreseeable future.
  • Enables “best-of-breed” deployments as customers may have business requirements that can only be delivered with specific platforms.

Conclusion

We are still in the early days of containers in NFV systems. They offer the promise of:

  • High density, low latency, low cost and agile deployment.
  • A reduced memory and power footprint, which makes them ideal for resource-constrained edge platform deployment.

Arm’s scalable multi-core technology is an ideal fit for well-designed containers.

In the infrastructure space, containers and VMs often co-exist and join forces to complete different NFV jobs. Containers allow for portability of applications, whereas VMs are designed to increase hardware utilisation.

Arm continues to support both container and VM open source projects to provide partners with choice and enable them to successfully deploy their NFV platforms.

1. In the enterprise space, Red Hat recently acquired CoreOS and integrated it into its OpenShift product line to provide a more complete offering.
