This blog is part one in a three-part series. Part two and part three will be linked here when available.
Imagine a software development world where:

- every code change is automatically built and tested the moment it is committed;
- every developer on the team works in an identical, reproducible environment;
- "it works on my machine" problems are a thing of the past.
Sounds great, right? All of these benefits can be realized by strategically combining the software tools Jenkins and Docker in a continuous integration-oriented development flow. For embedded software developers, however, this is not the end of the story. Testing and verifying code on physical hardware introduces serious limitations: development and testing are slowed by limited board availability, flash times, and hardware maintenance. Leveraging a software-based, or virtual, model of the hardware lets embedded software developers enjoy all the advantages of typical software development by removing the clunkiness and hassle of physical hardware. Arm Fast Models provide this equivalent functionality, offering 100% functionally accurate virtual models of Arm and custom IP.
Writing quality code targeting custom hardware systems is difficult enough; the tools enabling this development should be as streamlined as possible to avoid unnecessary complications. Leveraging continuous integration practices with Jenkins, Docker containers, and Arm Fast Models elegantly introduces a consistent and automated foundation to do what you do best: Drive the world forward.
This three-part blog series will show how to create this foundation, covering the following topics:
After reading through and leveraging the resources in this blog series, you and your team will be armed (pun intended) with the knowledge to create your customized consistent and automated software development workflow.
Let’s jump straight into part one.
Docker is a mechanism that isolates the dependencies for each application, or test, by packing them into containers. Each container runs an image that is designed to run a particular software app or apps, including all necessary libraries and dependencies. Images can be created from scratch, downloaded for free, and/or edited to fit your exact needs. Because Docker containers are portable (running on Mac, Linux, and Windows), teams can sync up and ensure their environments are the same, every time. Check out the Docker website for additional information.
Docker operates like a Virtual Machine (VM), but instead of each container having its own OS those resources are shared among Docker containers, allowing applications to be packaged with only what they need to run—no more, and no less. This enables containers to be much more lightweight, portable, and reusable than VMs.
The first step in using Docker is to install Docker…the correct type of Docker. Checking the Docker website, there are various options available depending on your host OS and end goals. The most prominent choice is between the Community Edition and the Enterprise Edition. Here is a breakdown of both, from the Docker docs:
The Community Edition is more than sufficient for customizing an environment for continuous integration purposes.
The next choice is between the different OS options. As with most enterprise-like software, Docker seems to be much easier to work with on Linux, but I am working perfectly fine with Docker on a Windows host. On the Docker site you can download the correct Docker for your OS, with options for Mac, Windows, and Linux distros, as well as Docker installs optimized for specific cloud services. For Windows and Mac users there is another choice to make: installing native Docker vs. what is called 'Docker Toolbox'. Here is a summary of the differences:
Requirements at the time of this blog:

- Docker for Windows: uses Hyper-V (native virtualization); requires Windows 10 Pro or Enterprise (not Home).
- Docker Toolbox: uses Oracle VM VirtualBox; for systems that do not meet the Docker for Windows requirements.
It is certainly possible to install Docker in a Linux VM on a Windows host if this fits your development flow better. I am personally using Docker for Windows, Community Edition. While the install process is different and slightly easier on Linux hosts, the development flow after installation is largely identical. For install instructions, Docker provides up-to-date and helpful installation guides on their site; for completeness' sake I'll detail the Docker for Windows installation process here.
A few details are important specifically to Windows installations of Docker at the time of this blog. The folks at Docker are actively developing Docker for Windows, so the information in this installation section may become inaccurate or outdated as time goes on; check their installation page for the most recent information. Docker for Windows requires Microsoft Hyper-V to run, and the Docker installer will take care of enabling it. However, once Hyper-V is enabled VirtualBox will no longer work (existing VirtualBox images remain intact); VirtualBox cannot run side-by-side with Docker for Windows. If you have a requirement to use VMs, simply install Docker inside your VM following the appropriate OS install instructions. Virtualization must also be enabled (this is separate from Hyper-V), which can be checked on the Performance tab of Task Manager.
Navigate to their installation page and select the 'Stable' build, which will download the Docker for Windows installer executable. Run it and follow the on-screen instructions. You will need to give Docker privileged access so it can properly manage Hyper-V VMs. After installation, start Docker by searching for 'Docker for Windows' in the Windows search bar. The first time it opens, Docker will pop up with a hello message pointing to documentation; on subsequent computer restarts Docker will start by itself. If Docker doesn't seem to want to start and complains, try starting it manually by right-clicking on the desktop shortcut (or the program in the Windows search bar) and selecting 'Run as Administrator'.
To ultimately run an application in a container, an image containing the required nuts and bolts must be created. An image is created by writing and then building a 'Dockerfile'. The best practices behind creating this extensionless file are extensive, and the documentation for each available command is informative but long. Here I'll provide the Dockerfile used to create the image for this example and walk through each step. See the .zip file attached at the end of this blog to run the example yourself. In addition to the startup material provided here, download an evaluation license for Fast Models below (click on the 'Evaluate for Linux' button) and place the Fast Model .zip file in the top-level directory of the files from this blog, in the same directory as the provided Dockerfile (no need to unzip the Fast Models file; it will be extracted automatically in the Docker image).
Download evaluation license for Fast Models
Here is the Dockerfile:
```dockerfile
# Install packages
RUN apt-get update && apt-get install -y apt-utils
RUN apt-get install -y \

# Create new user with GID and UID of jenkins
#RUN useradd --create-home --shell /bin/bash jenkins
RUN mkdir --parents /home/jenkins &&\
    groupadd --system jenkins &&\
    useradd --system --home /home/jenkins --shell /sbin/nologin --gid jenkins jenkins

# Install FMs
ADD FastModels_11-4-043_Linux64.tgz $JENKINS_HOME/
RUN cd $JENKINS_HOME/FastModels_11-4-043_Linux64/ && ./setup.sh --i-accept-the-license-agreement --basepath "$JENKINS_HOME/Arm/" &&\
    rm -r $JENKINS_HOME/FastModels_11-4-043_Linux64/

# Set License file path

# Setup example FM system
COPY ./m4_system/ $JENKINS_HOME/m4_system/
COPY ./ITMTrace/ $JENKINS_HOME/plugins/
COPY ./run_m4.py $JENKINS_HOME
RUN . $JENKINS_HOME/Arm/FastModelsTools_11.4/source_all.sh &&\
    cd $JENKINS_HOME/m4_system/model/ &&\

# Set FM startup sourcing for manual code work
RUN echo "\n#FM Startup Code\n" >> $JENKINS_HOME/.bashrc &&\
    echo "source $JENKINS_HOME/Arm/FastModelsTools_11.4/source_all.sh\n" >> $JENKINS_HOME/.bashrc

# Switch to jenkins user with proper rights to files in $JENKINS_HOME
RUN chown -R jenkins:jenkins $JENKINS_HOME
```
Every Dockerfile must start with a 'FROM' command, specifying what the image is built from. Here it is a custom image I created that has Arm tools pre-installed on a minimalist Ubuntu 16.04 image (which is pulled from Docker Hub). To ensure proper image security, a new user with limited privileges is created, with the group name and user name 'jenkins'. Fast Models is then installed, which involves adding the tarball to the Docker image (the 'ADD' command automatically untars compressed files) and running the install script setup.sh with the relevant parameters. When updating Fast Models, the tarball path and the subsequent locations in the Dockerfile must be updated to match the version and release numbers of your downloaded version.
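The auto-extraction behavior of 'ADD' can be tried outside Docker; the manual equivalent of what 'ADD' does with the Fast Models tarball looks like this (the directory and file names below are made up for illustration):

```shell
# Build a small tarball standing in for the Fast Models release archive
mkdir -p add_demo/FastModels && echo "echo installing" > add_demo/FastModels/setup.sh
tar -czf add_demo/fm.tgz -C add_demo FastModels

# ADD <archive> <dir> performs this extraction step implicitly
mkdir -p add_demo/image_home
tar -xzf add_demo/fm.tgz -C add_demo/image_home

# The archive contents are now unpacked, ready for the setup script to run
ls add_demo/image_home/FastModels
```

This is why the Dockerfile can run setup.sh immediately after the ADD line, with no explicit untar step.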
NOTE: Installing from the command line with the "--i-accept-the-license-agreement" option constitutes acceptance of the terms and conditions of the relevant Arm End User License Agreement (EULA). Installation and use of any product updates or new versions of the product are likewise subject to the terms and conditions of the relevant Arm EULA that applies at the time of install.
The license file must be referenced next to ensure the example model can build properly. Replace 'your_file_location_here' with the network path to your license, making sure there is no space between the '=' and your file location. The example code is then copied over and the Fast Model virtual platform is built in the Docker image. The required Fast Model scripts are added to the user's .bashrc file, so they are invoked automatically when the image is run manually. Finally, the user is switched to jenkins, which is given rights to the files and folders in that home directory, and the license file environment variable is set.
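The 'Set License file path' step boils down to a single environment-variable line. As a sketch, assuming the standard Arm license variable name ARMLMD_LICENSE_FILE (check your Fast Models documentation for the exact variable your version expects):

```dockerfile
# Hypothetical license setup -- replace 'your_file_location_here' with the
# network path to your license (no space around the '=')
ENV ARMLMD_LICENSE_FILE=your_file_location_here
```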
As a general note on common Dockerfile syntax, the '\' character continues a command onto the next line, and '&&' chains commands so that the next one runs only after the current one finishes successfully.
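Both pieces of syntax behave the same way in an ordinary shell, so you can try them outside Docker; a quick sketch (the directory and file names here are made up):

```shell
# '&&' runs the second command only if the first succeeds;
# a trailing '\' continues one long command onto the next line
mkdir -p syntax_demo && \
    echo "chained step ran" > syntax_demo/status.txt
cat syntax_demo/status.txt
```

If the mkdir failed, the echo would never run, which is exactly the behavior the Dockerfile relies on to stop a build at the first broken step.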
Running the following command in your terminal or command prompt will build a Docker image based on that Dockerfile:
```shell
docker build -t zach/fm-m4-example-itm:latest -f Dockerfile .
```
The -t specifies the 'tag' for the image, -f points to the Dockerfile, and the '.' at the end specifies the build context, i.e. which files are visible when building the image. Because it is a period '.', the build context is the current directory. A sub-directory can be named here if needed, but not in this case. Each COPY command must point to a file or directory within the build context. After building, check that the image was created with the 'docker images' command.
To run the docker container, run the following command:
```shell
docker run --rm -ti --cap-drop=all --memory=2G --cpus=1 zach/fm-m4-example-itm:latest
```
The extra options after 'docker run' provide, in order: proper clean-up (--rm), an interactive shell (-ti), further-restricted container permissions for security (--cap-drop=all), and caps on the resources the container can use at one time (--memory=2G and --cpus=1), also for security reasons. Last is the tag of the image to run (zach/fm-m4-example-itm:latest). Upon running, the Docker container will start in the home directory of the user jenkins. The command prompt should look like this:
With Arm Fast Models helpfully already set up and configured in the Docker image, the next step is to run an example program on a Cortex-M4 system. I created a simple Python script to automate the process, using the Fast Models Python scripting interface, PyCADI:
```python
# Import libraries
import os
import sys

# Set python path to Fast Models, as a check to see if FM installed properly
try:
    sys.path.append(os.path.join(os.environ['PVLIB_HOME'], 'lib', 'python27'))
except KeyError as e:
    print "Error! Make sure you source all from the fast models directory."
    sys.exit(1)  # Exit with error

import fm.debug

def set_itm_trace_file(model, file_name):
    # Find the ITMtrace plugin target and point it at the given output file
    targets = model.get_target_info()
    for target_info in targets:
        if target_info.target_name.find("ITMtrace") >= 0:
            target = model.get_target(target_info.instance_name)
            target.parameters["trace-file"] = file_name

# Build paths from the jenkins home directory
jenkins_home = os.environ['JENKINS_HOME']
plugin_path = str(jenkins_home)+"/plugins/ITMtrace.so"
model_path = str(jenkins_home)+"/m4_system/model/cadi_system/cadi_system_Linux64-Release-GCC-5.4.so"
app_path = str(jenkins_home)+"/m4_system/app_helloWorld/startup_Cortex-M4.axf"
out_path = str(jenkins_home)+"/output.txt"

# Set Environmental variable so the model loads the ITM trace plugin
os.environ["FM_TRACE_PLUGINS"] = plugin_path

# Load model
model = fm.debug.LibraryModel(model_path)

# Get cpu
cpu = model.get_cpus()[0]

# Load app onto cpu
cpu.load_application(app_path)

# Send ITM trace to the output file
set_itm_trace_file(model, out_path)

# Run the model, exit after timeout.
model.run(timeout=60)  # timeout value chosen for illustration
```
For more information on PyCADI, see my previous blog on the topic.
When we run this Python script in the created container, a new file called output.txt0 (the '0' at the end indicates that ITM channel 0 was the channel used) should be generated containing some welcome messages. Here are the commands I ran and their respective outputs:
The file output.txt0 is created after running the test, and contains a welcome message and the traditional 'Hello World'. In this case, the creation of output.txt0 indicates that the 'hello world' application ran successfully, which is verified by the command 'head output.txt0' returning the contents of the generated file. One can imagine a more complicated application generating multiple files based on some given input, or any other test application used to verify code integrity.
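This pass/fail check is easy to script, which is exactly what a CI job will do in the next part of this series. A minimal sketch that simulates the model's output file (the welcome text here is made up) and then verifies it:

```shell
# Simulate the file the model run would produce
printf 'Welcome to the Cortex-M4 example\nHello World\n' > output.txt0

# Verify the run the same way a CI job could: grep for the expected text
if grep -q "Hello World" output.txt0; then
    echo "test passed"
else
    echo "test failed"
fi
head output.txt0
```

A Jenkins job can key its build status off exactly this kind of check on the container's output.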
The overall goal of this three-part blog series is to create a consistent and automated embedded software development foundation; in this article the consistent development platform, Docker, was introduced. Sharing one Docker image creates a reliable and consistent development environment within teams, eliminating the many problems that arise when developing on different host OSs, or even on the same OS on different machines with different dependencies. To get the most benefit out of the Docker platform, automation needs to be introduced. With Docker working properly and an example test case in hand, the next step is to automate the process of running this test, with added benefits such as version control management. This is where Jenkins comes in; it will be set up from scratch to a working example in the next part. For the code and tools behind the content in this blog, see the files below.
Continue on to part two when available.