Implementing Embedded Continuous Integration with Jenkins and Docker: Part 2

This blog is part 2 in a 3-part series. Part 1 and part 3 will be linked here when available. 

Intro

This is the second part in a 3-part blog series that creates a consistent and automated software development foundation from scratch, enabling any team to adopt the development methodology. Continuous integration practices, Jenkins, Docker containers, and Arm Fast Models form this foundation, whose structure naturally improves code quality, speeds bug elimination, and shortens time-to-market. The building blocks are introduced in parts one and two, and the third part rearranges these pieces from a simple getting-started example into a more realistic development pipeline.

In part one, Docker is installed and configured, and a ‘hello world’ application runs in a custom Docker container on a virtual Cortex-M4. The code is considered correct if an output.txt0 file is created with ‘Hello World’ printed inside, which was verified to be the case. In part two, Jenkins is set up and configured to automate the test developed in part one, creating a verification and automation foundation you can tailor to your exact development needs. If you missed part one, I recommend reading it first for the proper background and the required source files for this example.

Part one

Jenkins

Jenkins is an open source continuous integration server created to automate and integrate your entire development flow (build, test, merge) and can be configured to fit your needs, as illustrated rather heavy-handedly by its butler logo. Jenkins was originally developed under the name Hudson: Kohsuke Kawaguchi built an automation server so that he could know, before committing code to one of Sun’s Java repositories, whether the code was going to work. In 2011 Jenkins split off from Hudson, and while both continue to exist, Jenkins is more popular in the CI world. Another common option is Bamboo, a commercial tool with Atlassian support. Choosing the right CI tool depends on your particular situation; I favor Jenkins due to its popularity and abundance of helpful plugins.

The next step towards continuous integration bliss is installing Jenkins. I decided to run Jenkins in a Docker container (having become cozier working with Docker), and the Jenkins documentation provides extensive information on how to accomplish this if you encounter unexplained errors or would like more implementation details. There are many ways to install Jenkins; instructions for installing in an Ubuntu 16.04 VM using the package manager ‘apt’ are also detailed here. While this blog focuses on these two installation types, the post-install Jenkins usage applies to any Jenkins installation. I will also be leveraging Jenkins Blue Ocean, a graphical tool that greatly simplifies the continuous integration/delivery process and offers a great user experience.

Jenkins Install on Docker

With Docker, installing Jenkins with Blue Ocean is a simple two-step process on any OS:

  1. Start Jenkins in Docker by running the following command in a terminal/command prompt:
    • docker run \
        -u root \
        --rm \
        -d \
        -p 8080:8080 \
        -p 50000:50000 \
        -v jenkins-data:/var/jenkins_home \
        -v /var/run/docker.sock:/var/run/docker.sock \
        jenkinsci/blueocean
    • *Note: For Windows, replace the ‘\’ characters with ‘^’ to enable multi-line command-line input.
    • Here is a brief explanation of each option:
        1. docker run runs a specified image (named later in the command) in a new container.
        2. -u runs the container as the user ‘root’.
        3. --rm is for cleanliness; the container is automatically removed when shut down.
        4. -d runs the container in the background, in ‘detached’ mode. If not specified, the Docker log for the container will output in the terminal window. Keeping the -d option is helpful when putting this command in a batch/shell script, as it is tidy and keeps the launching terminal from being attached to the Jenkins instance.
        5. The -p options map (‘publish’) a container port to a host port. The host port number comes first, the container port second. Port 8080 is used to access Jenkins via a web browser, and port 50000 is used by JNLP-based Jenkins agents on other machines (not required for this example, but good to know for master-slave systems).
        6. The -v options map host ‘volumes’ into the container so the container can use, store, and create data on the host. The first -v saves Jenkins configuration data to the host machine, so restarting Jenkins doesn’t mean redoing all your work. The second -v allows Jenkins to communicate with the Docker daemon on the host, which is required to run Docker containers through Jenkins when Jenkins is itself in a Docker container.
        7. The last line specifies the image that the Docker container runs: the Blue Ocean image maintained by the Jenkins project. If this image is not already downloaded on your host machine, the docker run command will download it automatically.
  2. Follow the Post-installation setup wizard
    • In a web browser, navigate to this address: localhost:8080.
    • Follow basic setup instructions.
      • For first-time users only, a special passcode is required, which is stored in a local file. The Jenkins setup wizard will guide you through finding and entering this passcode.
    • Install recommended plugins.
    • Create first admin user with whatever username and password you choose. You will have to remember this to access your Jenkins setup again!

Check that the 'Open Blue Ocean' option is on the left side of the browser when on localhost:8080, as it should be pre-installed with the Docker image. If not, see step 3 of the next section for instructions on how to get the Blue Ocean plugin.
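As noted in the option descriptions above, the docker run command lends itself to a small convenience script. Below is a minimal sketch, assuming Docker and curl are available on the host; the script name, function names, and the readiness poll are my own additions for illustration, not part of the official Jenkins instructions.

```shell
#!/bin/sh
# start-jenkins.sh - hypothetical wrapper around the docker run command
# above, plus a poll that waits until the Jenkins web UI answers.

start_jenkins() {
  docker run \
    -u root \
    --rm \
    -d \
    -p 8080:8080 \
    -p 50000:50000 \
    -v jenkins-data:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkinsci/blueocean
}

# Poll localhost:8080 until Jenkins responds, giving up after ~60 seconds.
wait_for_jenkins() {
  for _ in $(seq 1 30); do
    if curl -sf http://localhost:8080/login >/dev/null 2>&1; then
      echo "Jenkins is up at http://localhost:8080"
      return 0
    fi
    sleep 2
  done
  echo "Jenkins did not come up in time" >&2
  return 1
}
```

Calling `start_jenkins && wait_for_jenkins` from a terminal starts the container and blocks until the UI is reachable; docker run prints the container ID, which you can save for a matching docker stop later.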

Jenkins Install on Linux

Installing Jenkins through the Advanced Package Tool (apt) is a similarly simple process, especially if you do not want to, or cannot, host Jenkins within a Docker container. One extra requirement: Java must be installed for Jenkins to install and work properly (this is not required when installing through Docker). Type ‘java -version’ into your command line; if Java is not installed, install it with the following terminal command:

sudo apt install openjdk-8-jre

Once java is successfully installed, run these commands from a terminal to install Jenkins:

wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins

Once this is complete, Jenkins should automatically start and is accessible through a web browser at the address localhost:8080.
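Before opening a browser, you can quickly confirm the service came up from the terminal. A hedged sketch follows; the password file path is the standard location for the Debian package, but treat the exact paths as assumptions for your system:

```shell
# Verify the Jenkins service is running (on systemd-based Ubuntu) and
# locate the one-time setup passcode that the wizard will ask for.
if command -v systemctl >/dev/null 2>&1; then
  systemctl is-active jenkins || echo "jenkins service is not active yet"
fi

PASS_FILE=/var/lib/jenkins/secrets/initialAdminPassword
if [ -f "$PASS_FILE" ]; then
  echo "Initial admin password:"
  sudo cat "$PASS_FILE"
else
  echo "Password file not found at $PASS_FILE (is Jenkins installed?)"
fi
```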

To properly complete the installation two additional steps are required:

  1. Follow the Post-installation setup wizard (same steps as installing through Docker)
    1. In a web browser, navigate to this address: localhost:8080.
    2. Follow basic setup instructions.
      1. For first-time users only, a special passcode is required, which is stored in a local file. The Jenkins setup wizard will guide you through finding and entering this passcode.
    3. Install recommended plugins.
    4. Create first admin user with whatever username and password you choose. You will have to remember this to access your Jenkins setup again!
  2. Install BlueOcean Plugin
    1. Navigate to ‘Manage Jenkins’ > ‘Plugin Manager’ via the left navigation bars or go to the url: localhost:8080/pluginManager.
    2. Go to the ‘Available’ tab and search for Blue Ocean. Select the top option; its dependencies will be installed automatically.
    3. Click ‘install without restart’.
    4. Let it install, then refresh the page upon completion and navigate back to localhost:8080. A new option on the left called ‘Blue Ocean’ should be present.

[Screenshot: Blue Ocean installed in Jenkins]

With Jenkins up and running, we can create the first pipeline.

Create Pipeline

A ‘pipeline’ refers to two things in this blog: (1) a CI pipeline and (2) a Jenkins ‘pipeline’. I define a CI pipeline as the sequence of steps in a CI flow. For example, a developer may want to run some unit tests and verify they pass every time code is merged to version control; their CI pipeline would be to build the software, test the software, then merge the software in version control. A Jenkins ‘pipeline’ is a suite of plugins that enables Jenkins to script CI pipelines. Unless referring to these plugins, assume that the word ‘pipeline’ refers to the CI pipeline concept.

A Jenkinsfile defines each step in the pipeline and can be managed and tracked in source control just like any other code. This enables CI best practices to be applied to the CI pipeline configuration file itself! Blue Ocean provides an intuitive GUI for editing the pipeline code, the Jenkinsfile.

On the left of the Jenkins GUI is a link to ‘Open Blue Ocean’. Click on that and navigate to the ‘New Pipeline’ button if a prompt to create a new pipeline isn’t already front and center on the screen.

[Screenshot: the ‘Open Blue Ocean’ link in the Jenkins sidebar]

The next prompt asks you to select your source control repository, which is required for Jenkins to work properly. I suggest using SSH, the path where I’ve encountered the least resistance with both Jenkins and git. When you enter your repository URL, Jenkins will generate a public SSH key, which you can add to your own Git server.

Upon selecting ‘Create Pipeline’, a new screen opens: the Jenkins Blue Ocean pipeline editor. Depending on whether you are running the Fast Models in Docker, as detailed in the previous blog, or locally on Linux, there are two slightly different Jenkins pipelines to configure. Both sets of steps and outputs are detailed here. Note that using Docker is excellent for maintaining environmental consistency and prevents issues across the various people, environments, and platforms on your teams.

Setup Simulation in Docker

This method works regardless of how Jenkins was installed, whether through ‘apt’ on Linux or through Docker on any OS. Please complete part one of this blog series to ensure your Docker environment is set up as expected. To add the two pipeline steps--running the python script and checking its output--click on the plus sign icon to create the first test stage. Name it whatever you like; mine is called ‘Test’. Next, add the following two steps by clicking the ‘add step’ --> ‘Shell Script’ buttons:

set +e &&
. /root/ARM/FastModelsTools_11.3/source_all.sh &&
set -e &&
python /root/FMs/run_m4.py

head /root/FMs/output.txt0

Because Jenkins runs the Docker container non-interactively, the Fast Models tools must be sourced manually before the Fast Model can run via python. This is typically done on startup by .bashrc, which does not apply here because Jenkins does not run docker commands in a login shell.
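The set +e / set -e bracketing in that step is worth calling out: vendor setup scripts sometimes return a nonzero status even when they succeed, which would otherwise abort the Jenkins step. A minimal sketch of the pattern, using a throwaway stand-in for source_all.sh (the file and variable below are hypothetical, for illustration only):

```shell
# Create a stand-in setup script that, like some vendor scripts, returns
# a nonzero status even though it did its job.
printf 'FM_READY=yes\nreturn 1\n' > /tmp/fake_source_all.sh

set +e                      # tolerate the nonzero return while sourcing
. /tmp/fake_source_all.sh
set -e                      # from here on, any failing command stops the step
echo "environment ready: FM_READY=$FM_READY"
# prints: environment ready: FM_READY=yes
```

Without the set +e guard, the nonzero return from the sourced script would terminate the shell step before the simulation ever ran.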

Now, with the ‘Test’ stage just created selected, click the ‘Settings’ button at the bottom right of the screen, set the Agent to ‘docker’ and the Image to zach/fm-m4-example:latest. When the Jenkins pipeline runs, that Docker image will be pulled and the shell scripts will run in the resulting container.

Each time a change is made in the Blue Ocean GUI, the underlying Jenkinsfile changes, which dictates the pipeline behavior at runtime. To view the Jenkinsfile code, use the hotkey CTRL-S. The code can also be edited this way, which is helpful when the Blue Ocean GUI doesn’t offer the necessary syntax option. However, adding too much code that isn’t represented in the Blue Ocean GUI can get confusing, since it keeps some of your system behavior ‘hidden’ from the GUI.

At this point the pipeline script is set up and ready to run, and should look like this:

pipeline {
  agent none
  stages {
    stage('Test') {
      agent {
        docker {
          image 'zach/fm-m4-example:latest'
        }
      }
      steps {
        sh '''set +e &&
. /root/ARM/FastModelsTools_11.3/source_all.sh &&
set -e &&
python /root/FMs/run_m4.py'''
        sh "head /root/FMs/output.txt0"
      }
    }
  }
}
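If you edit the Jenkinsfile by hand rather than through Blue Ocean, a running Jenkins instance can lint it for you over HTTP via the pipeline-model-converter endpoint. A hedged sketch, assuming the server is at localhost:8080 and that CSRF crumbs are not required on your install:

```shell
# Validate a local declarative Jenkinsfile against a running Jenkins server.
JENKINS_URL=${JENKINS_URL:-http://localhost:8080}

if curl -sf "$JENKINS_URL/login" >/dev/null 2>&1; then
  # POST the Jenkinsfile; the response reports whether it parsed cleanly.
  curl -s -X POST -F "jenkinsfile=<Jenkinsfile" \
    "$JENKINS_URL/pipeline-model-converter/validate"
else
  echo "Jenkins not reachable at $JENKINS_URL; skipping validation"
fi
```

This catches syntax mistakes before a commit triggers a full pipeline run.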

Setup Simulation in Host

If you do not want or need to use Docker to run Arm Fast Model simulations, follow the instructions here. Before proceeding, ensure that the Arm Fast Models are initialized on your host Linux system. See the previous blog for instructions on how to accomplish this.

First, the requisite files are needed to run the test. Before setting up the Jenkins pipeline, download the files included in part one of this blog series and untar them into a directory in the same git repository this Jenkins pipeline is connected to, referred to here as $BLOGBASEDIR. Rename the directory called ‘ITMtrace’ to ‘plugins’, or you will get an error that the python script cannot find the correct Fast Model plugin. Finally, commit and push the repository with $BLOGBASEDIR included, which allows Jenkins to see these files in the git repository.

The steps to run Fast Model simulations on the Linux host are as follows:

  1. Build the hardware virtual platform model. It is not pre-built because the files are large enough to cause problems uploading to some git repositories.
  2. Run the simulation.
  3. Check the results.

The steps will look like this:

[Screenshot: the ‘PyCI’ pipeline, master branch, in the Blue Ocean editor]

Add these three steps by clicking on the ‘add step’ --> ‘Shell Script’ buttons and typing in:

set +e &&
. /home/zach/ARM/FastModelsTools_11.3/source_all.sh &&
set -e &&
cd $BLOGBASEDIR/m4_system/model &&
./linux_build.sh

set +e &&
. /home/zach/ARM/FastModelsTools_11.3/source_all.sh &&
set -e &&
python $BLOGBASEDIR/run_m4.py

head $BLOGBASEDIR/output.txt0

Replace the paths to the Fast Models tools with the correct ones for your system. The ‘output.txt0’ file will be placed where the run_m4.py script is located, also in $BLOGBASEDIR.
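To avoid repeating the tool path in every step, the three host-side steps can be folded into one helper kept in the repository. A sketch under the assumption that FM_TOOLS and BLOGBASEDIR are set appropriately for your machine; the script and function names are mine, not part of the Fast Models tooling:

```shell
# run_host_test.sh - hypothetical helper wrapping the three steps above.
FM_TOOLS=${FM_TOOLS:-$HOME/ARM/FastModelsTools_11.3}
BLOGBASEDIR=${BLOGBASEDIR:-blog}

run_host_test() {
  set +e
  . "$FM_TOOLS/source_all.sh"       # vendor setup may return nonzero
  set -e
  ( cd "$BLOGBASEDIR/m4_system/model" && ./linux_build.sh )  # 1. build model
  python "$BLOGBASEDIR/run_m4.py"                            # 2. run simulation
  head "$BLOGBASEDIR/output.txt0"                            # 3. check results
}
```

Each Jenkins shell step then shrinks to sourcing this file and calling run_host_test, keeping the machine-specific path in one place.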

Run Pipeline


To save and run the script, press ‘Save’ in the top right, followed by ‘Save & run’. This will send you to a new page which records current and past runs. Selecting the current run shows different stats, along with the overall results. Sometimes I’ve had to refresh the page for results to show up in a timely manner; this test takes ~10-20 seconds to run on my laptop. If you are also building the virtual model on the Linux host machine (following the steps for setting up the simulation on the host, not in Docker), the test will take 1-2 minutes. This screenshot shows a completed run on the Linux host machine, with $BLOGBASEDIR being 'blog/' in this case:

[Screenshot: a successful run of the ‘PyCI’ pipeline, with the output.txt0 contents displayed]

The pipeline named ‘PyCI’ ran its 24th test successfully (I tried this example a few times), and the expected output from the output.txt0 file is displayed. It is important to note that if output.txt0 were not generated, the Jenkins pipeline would stop at that step and report an error, which means that in this simple case, whenever the pipeline passes, our code can be considered correct (it outputs some ‘hello’ message!). This type of test infrastructure can be extended to file existence checks, equivalence checks, content verification, customized test report analysis, code coverage, and more; there are many different techniques to verify that your software runs as expected, and Jenkins supports a huge range of them. This Jenkins pipeline can also be set up to run before any commit to a certain git branch, and can include merging branches when all tests pass, ensuring that certain branches always work at a defined state. More of these techniques will be covered in subsequent blogs.
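As one example of the content-verification direction mentioned above, the final ‘head’ step could be replaced with a check that fails the stage unless the greeting is actually present. A minimal sketch; the file is created here only so the snippet is self-contained, whereas in the pipeline the simulation produces it:

```shell
# Stand-in for the simulation output; in the real pipeline, run_m4.py
# creates this file on the virtual Cortex-M4.
printf 'Hello World\n' > output.txt0

# Content verification: grep's exit status fails the Jenkins step for us
# if the expected text is missing.
if grep -q 'Hello World' output.txt0; then
  echo "PASS: expected output found"
else
  echo "FAIL: expected output missing" >&2
  exit 1
fi
# prints: PASS: expected output found
```

Because Jenkins treats any nonzero exit status as a step failure, this one check turns “the file merely exists” into “the file says what it should.”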

Conclusion

Leveraging Jenkins, Docker, and Arm tools for embedded software development saves time and increases quality by providing a consistent and automated CI workflow across a development team. The possibilities for streamlining software development with these tools are simultaneously expansive and approachable. In part three this simple example demonstrating the base functionality of the tools will be expanded to form a realistic use case. Learning to drive and play with a toy car is helpful to a point, but ultimately test driving a real car will drive the point home (pun intended). The final blog of this 3-part series will turn this toy car into a red Lamborghini.

To get the source files to run this example yourself, and to set up the correct Docker environment for both running Jenkins in a Docker container and simulations in Docker, see part one of this blog here. The link to part three, once available, will be put below.

Continue on to Part 3 when available.
