This blog is part 2 in a 3-part series. Part 1 is linked below; part 3 will be linked here when available.
This is the second part in a 3-part blog series that builds a consistent, automated software development foundation from scratch, one that any team can adopt. Continuous integration practices, Jenkins, Docker containers, and Arm Fast Models form this foundation, which by its structure improves code quality, eliminates bugs earlier, and shortens time-to-market. Parts one and two introduce the building blocks, and part three rearranges the pieces from this simple getting-started example into a more realistic development pipeline.
In part one, Docker was installed and configured, and a ‘hello world’ application ran in a custom Docker container on a virtual Cortex-M4. The code is considered to work correctly if an output.txt0 file is created with ‘Hello World’ printed inside, which was verified to be the case. In part two, Jenkins is set up and configured to automate the test developed in part one, creating the verification and automation foundation you can tailor to your exact development needs. If you missed part one, I recommend reading it first to get the proper background and the required source files for this example.
[CTAToken URL = "https://community.arm.com/tools/b/blog/posts/implementing-embedded-continuous-integration-with-jenkins-and-docker-part-1" target="_blank" text="Part one" class ="green"]
Jenkins is an open-source continuous integration server created to automate and integrate your entire development flow (build, test, merge), and it can be configured to fit your needs, a role illustrated rather heavy-handedly by its butler logo. Kohsuke Kawaguchi originally built the automation server, then named Hudson, to know whether his code was going to work before committing it to one of Sun’s Java repositories. In 2011 Jenkins was forked from Hudson, and while both continue to exist, Jenkins is the more popular in the CI world. Another common tool is Bamboo, a commercial product with Atlassian support. Choosing the right CI tool depends on your particular situation; I favor Jenkins due to its popularity and abundance of helpful plugins.
The next step towards continuous integration bliss is installing Jenkins. I decided to run Jenkins in a Docker container (having become cozier working with Docker), and the Jenkins documentation provides extensive information on how to accomplish this if you encounter unexplained errors or would like more implementation details. There are many ways to install Jenkins; instructions for installing in an Ubuntu 16.04 VM using the package manager ‘apt’ are also detailed here. While this blog focuses on these two installation types, the post-install Jenkins usage applies to any Jenkins install. I will also be leveraging Jenkins Blue Ocean, a graphical tool that greatly simplifies the continuous integration/delivery process and has a great user experience.
With Docker already installed and configured in part one, installing Jenkins with Blue Ocean comes down to a single command on any OS:
docker run \
  -u root \
  --rm \
  -d \
  -p 8080:8080 \
  -p 50000:50000 \
  -v jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkinsci/blueocean
Check that the 'Open Blue Ocean' option appears on the left side of the browser at localhost:8080; it should come pre-installed with this Docker image. If not, see step 3 of the next section for instructions on installing the Blue Ocean plugin.
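On first visit, Jenkins asks for an initial admin password to unlock the installation. When Jenkins runs in a container as above, one way to read that password out is shown in this minimal sketch (the container ID is a placeholder for whatever 'docker ps' reports on your machine):

docker ps                                                                 # note the ID of the jenkinsci/blueocean container
docker exec <container-id> cat /var/jenkins_home/secrets/initialAdminPassword   # print the unlock password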
Installing Jenkins through the Advanced Package Tool (apt) is a similarly simple process, especially if you do not want to or cannot host Jenkins within a Docker container. Unlike the Docker route, however, you must have Java installed for Jenkins to install and work properly. Type ‘java -version’ into your command-line; if Java is not installed, install it with the following terminal command:
sudo apt install openjdk-8-jre
Once Java is successfully installed, run these commands from a terminal to install Jenkins:
wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins
Once this is complete, Jenkins should automatically start and is accessible through a web browser at the address localhost:8080.
To properly complete the installation, a few additional steps are required:
1. Unlock Jenkins: browse to localhost:8080 and enter the initial administrator password when prompted; on an apt install it can be read from /var/lib/jenkins/secrets/initialAdminPassword (see the commands after this list).
2. Install the suggested plugins when prompted and create the first admin user.
3. Install the Blue Ocean plugin: from the Jenkins dashboard, navigate to ‘Manage Jenkins’ --> ‘Manage Plugins’, search for ‘Blue Ocean’ under the ‘Available’ tab, and install it.
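For reference, assuming the Debian/Ubuntu defaults, the service state and the unlock password from step 1 can be checked from a terminal:

sudo systemctl status jenkins                            # confirm the Jenkins service is running
sudo cat /var/lib/jenkins/secrets/initialAdminPassword   # password requested on first login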
With Jenkins up and running, we can create the first pipeline.
A ‘pipeline’ refers to two things in this blog: (1) a CI pipeline and (2) a Jenkins ‘pipeline’. I define a CI pipeline as the sequence of steps in a CI flow. For example, a developer may want to run some unit tests and verify they pass every time code is merged to version control; their CI pipeline would be to build the software, test the software, then merge the software in version control. A Jenkins ‘pipeline’ is a suite of plugins that enables Jenkins to script CI pipelines. Unless referring to those plugins, assume that the word ‘pipeline’ refers to the CI pipeline concept.
A Jenkinsfile defines each step in the pipeline and can be managed and tracked in source control just like any other code. This enables CI best practices to be applied to the CI pipeline configuration file itself! Blue Ocean provides an intuitive GUI to edit the pipeline code, the Jenkinsfile.
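For example, if you edit the Jenkinsfile outside of Blue Ocean, it travels through version control like any other source file (a sketch; the commit message and branch name here are hypothetical):

git add Jenkinsfile
git commit -m "Update CI pipeline: add test stage"
git push origin master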
On the left of the Jenkins GUI is a link to ‘Open Blue Ocean’. Click on that and navigate to the ‘New Pipeline’ button if a prompt to create a new pipeline isn’t already front and center on the screen.
The next prompt is to select your source control repository, which is required for Jenkins to work properly. I suggest using SSH, the path where I’ve encountered the least resistance with both Jenkins and git. When you enter your repository URL, Jenkins will generate a public SSH key, which you then register with your own Git server.
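After registering the key, it is worth confirming that your Git host accepts SSH connections at all. If your repository happens to live on GitHub, for instance, a quick sanity check from a machine holding a registered key looks like this (substitute your own Git server):

ssh -T git@github.com    # should greet you by username rather than ask for a password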
Upon selecting ‘Create Pipeline’, a new screen opens: the Jenkins Blue Ocean pipeline editor. Depending on whether you are running the Fast Models in Docker, as detailed in the last blog part, or locally on Linux, there are two slightly different Jenkins pipelines to configure. Both sets of steps and outputs are detailed here. Note that Docker is excellent for maintaining environmental consistency, preventing issues across the various people, environments, and platforms on your teams.
This method will work regardless of Jenkins install implementation, whether through ‘apt’ on Linux or through Docker on any OS. Please complete the first part of this blog to ensure your Docker environment is set up as expected. To add the two pipeline steps (running the python script and checking its output), click on the plus sign icon to create the first test stage. Name it whatever you like; mine is called ‘Test’. Next, add the following two steps by clicking on the ‘add step’ --> ‘Shell Script’ buttons:
set +e && . /root/ARM/FastModelsTools_11.3/source_all.sh && set -e && python /root/FMs/run_m4.py
head /root/FMs/output.txt0
Because Jenkins runs the Docker container non-interactively, the Fast Models tools must be sourced manually before the Fast Model can be run via python. This sourcing is typically done at startup by .bashrc, which does not apply here because Jenkins does not run docker commands in a login shell.
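Spelled out line by line, the first shell step does the following (my reading of the command; the ‘set +e’ guard assumes source_all.sh can return a nonzero status even when it succeeds):

set +e                                           # don't abort the step while sourcing the tools
. /root/ARM/FastModelsTools_11.3/source_all.sh   # set up the Fast Models environment
set -e                                           # from here on, any failure fails the step
python /root/FMs/run_m4.py                       # launch the Cortex-M4 simulation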
Now at the bottom right of the screen, with the ‘Test’ stage just created selected, click the ‘Settings’ button, set the Agent to ‘docker’ and the Image to zach/fm-m4-example:latest. When the Jenkins pipeline runs, that Docker image will be used and the shell scripts will run in a container created from it.
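Since the zach/fm-m4-example image was built locally in part one rather than pulled from a public registry, it is worth confirming that Docker can see it before the first run:

docker images zach/fm-m4-example    # the 'latest' tag built in part one should be listed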
Each time a change is made in the Blue Ocean GUI, the underlying Jenkinsfile changes, and that file dictates the pipeline behavior at runtime. To view the Jenkinsfile code, use the hotkey CTRL-S. The code can be edited this way as well, which is helpful when the Blue Ocean GUI doesn’t offer the necessary syntax option. However, adding too much code that isn’t represented in the Blue Ocean GUI can get confusing, as it keeps part of your system behavior ‘hidden’ from the Blue Ocean GUI.
At this point the pipeline script is set up and ready to run, and should look like this:
pipeline {
    agent none
    stages {
        stage('Test') {
            agent {
                docker {
                    image 'zach/fm-m4-example:latest'
                }
            }
            steps {
                sh '''set +e && . /root/ARM/FastModelsTools_11.3/source_all.sh && set -e && python /root/FMs/run_m4.py'''
                sh "head /root/FMs/output.txt0"
            }
        }
    }
}
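As an optional sanity check, Jenkins exposes a declarative pipeline linter over HTTP, so a Jenkinsfile can be validated from a terminal before running it. A sketch, assuming your instance allows the request without a CSRF crumb (authenticated setups may also need curl's --user option):

curl -X POST -F "jenkinsfile=<Jenkinsfile" http://localhost:8080/pipeline-model-converter/validate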
If you do not want or need to use Docker to run Arm Fast Model simulations, use the instructions here. Before proceeding, ensure that the Arm Fast Models are initialized on your host Linux system; see the previous blog for instructions on how to accomplish this.
First, the requisite files are needed to run the test. Before setting up the Jenkins pipeline, download the files included in part one of this blog series and untar them into a directory in the same git repository this Jenkins pipeline is connected to, referred to here as $BLOGBASEDIR. Rename the directory called ‘ITMtrace’ to ‘plugins’, or you will get an error that the python script cannot find the correct Fast Model plugin. Finally, commit the git repository with $BLOGBASEDIR included, as sketched below. This will allow Jenkins to see these files in the git repository.
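A minimal sketch of those steps (the archive name is hypothetical; use the file downloaded from part one):

tar -xf fm_example_files.tar -C $BLOGBASEDIR     # unpack the example files into the repository
mv $BLOGBASEDIR/ITMtrace $BLOGBASEDIR/plugins    # rename so run_m4.py can locate the plugin
git add $BLOGBASEDIR
git commit -m "Add Fast Models example files"
git push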
The steps to run Fast Model simulations on the Linux host are as follows: (1) build the Cortex-M4 model with its linux_build.sh script, (2) run the simulation with the run_m4.py script, and (3) check the simulation output in output.txt0. In the Blue Ocean editor, the steps will look like this:
Add these three steps by clicking on the ‘add step’ --> ‘Shell Script’ buttons and typing in:
set +e && . /home/zach/ARM/FastModelsTools_11.3/source_all.sh && set -e && cd $BLOGBASEDIR/m4_system/model && ./linux_build.sh
set +e && . /home/zach/ARM/FastModelsTools_11.3/source_all.sh && set -e && python $BLOGBASEDIR/run_m4.py
head $BLOGBASEDIR/output.txt0
Replace the paths to the Fast Models tools with the correct ones for your system; the ‘output.txt0’ file will be placed where the run_m4.py script is located, which is also in $BLOGBASEDIR.
To save and run the script, press ‘Save’ in the top right, followed by ‘Save & run’. This will send you to a new page that records current and past runs. Selecting the current run shows various stats, along with the overall results. Sometimes I’ve had to refresh the page for results to show up in a timely manner; this test takes ~10-20 seconds to run on my laptop. If you are also building the virtual model on the Linux host machine (following the steps for setting up the simulation on the Linux host, not in Docker), the test will take 1-2 minutes. This screenshot shows a completed run on the Linux host machine, with $BLOGBASEDIR being 'blog/' in this case:
The pipeline named ‘PyCI’ ran its 24th test successfully (I tried this example a few times), and the expected output from the output.txt0 file is displayed. It is important to note that if output.txt0 were not generated, the Jenkins pipeline would stop at that step and give an error, which means that in this simple case, whenever the pipeline passes, our code can be considered correct (outputting some ‘hello’ message!). This type of test infrastructure can be extended to file existence checks, equivalence checks, content verification, customized test report analysis, code coverage, and more; there are many different techniques to verify that your software is running as expected, and Jenkins supports a huge range of methods. This Jenkins pipeline can also be set up to run before any commit to a certain git branch, and can include merging branches when all tests pass, ensuring that certain branches always work at a defined state. More of these techniques will be covered in subsequent blogs.
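As one small example of content verification, the final ‘head’ step could be replaced with a check that fails the stage unless the expected message actually appears in the output (a sketch; the expected string comes from part one's application):

grep "Hello World" $BLOGBASEDIR/output.txt0    # a nonzero exit status fails the pipeline step if the text is missing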
Leveraging Jenkins, Docker, and Arm tools for embedded software development saves time and increases quality by providing a consistent and automated CI workflow across a development team. The possibilities for streamlining software development with these tools are simultaneously expansive and approachable. In part three, this simple example demonstrating the base functionality of the tools will be expanded into a realistic use case. Learning to drive by playing with a toy car is helpful to a point, but ultimately test driving a real car will drive the point home (pun intended). The final blog of this 3-part series will turn this toy car into a red Lamborghini.
To get the source files to run this example yourself, and to set up the correct Docker environment for both running Jenkins in a Docker container and simulations in Docker, see part one of this blog here. The link to part three, once available, will be put below.
Continue on to Part 3 when available.