Linaro provides a complete software stack for secure boot, u-boot, and Linux. This article explains how to build and run this software on Arm Fast Models. Although there is some existing information on this topic, two assumptions often create challenges when trying to apply the Linaro deliverables to an actual project: the default CPU configuration (the Arm AEM dual-cluster model) and the default host operating system (Ubuntu).
This article provides some confidence that the Linaro deliverables can be adapted when a project is a little different from the default assumptions. It also provides some insight into what is really happening and how to debug the flow when just following the instructions doesn’t work.
The default configuration is to run on the Arm AEM (architectural envelope model) with a dual-cluster system and one core in each cluster. For this exercise, let’s imagine a new embedded Linux project is starting and it uses a different CPU configuration, a Cortex-A53 single cluster, quad core configuration.
In addition to changing the CPU configuration, another challenge is that Ubuntu seems to be the most popular host operating system in the software development community. In contrast, the EDA community continues to focus on Red Hat Enterprise Linux (or the CentOS flavor); in 2018, the focus is RHEL 6 and 7. This is primarily due to the cost of software testing and the impact of retesting when a new operating system is introduced. Each EDA company publishes operating system roadmaps, and I checked one from Cadence and one from Synopsys to confirm that Red Hat Enterprise Linux 7 is still a primary platform in 2018.
This poses a challenge for a project with embedded Linux development and EDA tools for chip and system development. Although every company is unique, a reasonable assumption is that the software team and hardware team should work on the same network with the same host operating system if they want to share information during the early software development and hardware validation phases of the project. After all, this is a big benefit of using virtual prototyping: improved communication among the various teams in a project. For the purposes of this example, let’s say everybody is using Red Hat Enterprise Linux 7 (or the CentOS 7 equivalent).
With the host operating system and the target CPU configuration set, let’s see how to use the Linaro deliverables under these slightly different conditions. Hopefully this example will provide enough background to enable readers to adjust as needed for other combinations.
The current version of Arm Fast Models, version 11.2, will be used. Another assumption is that most embedded projects will need to change the hardware configuration in some way, and will want to learn how to do this with Arm Fast Models to create a virtual prototype that best represents the new system being designed.
There is a short HOW TO article on the Arm community which gives an overview of the build and run process. There is an option to run pre-existing binaries, but to do any software development all the software should be compiled from source so it can be changed and debugged. For this reason, only the build from source path is covered here.
The target for this article is 64-bit Linux. Android, which is also supported by the Linaro software stack and Fast Models, is not covered here; it can be a topic for another time.
Make sure to install git and python3 on the machine before getting started. One of the challenges with any Linux distribution is how to make sure all the required tools are available, and if not, to get them installed. On a network of Linux machines, this may require a system admin to get involved since individual users may not have the ability to install missing packages. On RHEL 7 or CentOS 7, the ‘yum’ command can be used to install missing packages.
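For example, installing and verifying git with yum looks like this (python3 needs a few extra steps, which are covered below):

$ sudo yum install git
$ git --version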
For my trial, I set up a machine with CentOS 7 and selected a starting configuration with the Gnome desktop and all software development tools. This will get most of what is needed, but some extra packages are always required.
To get started, perform the git setup below, changing the name and e-mail as needed. The exact values don’t really matter since they will not be used to commit any changes to any of the repositories.
$ git config --global user.name "John Doe"
$ git config --global user.email "john.doe@example.com"
Get the workspace setup script from the HOWTO link or paste the following into a browser address bar, or better yet use wget to retrieve it:
$ wget https://community.arm.com/cfs-file/__key/telligent-evolution-components-attachments/01-3485-00-00-00-01-24-83/workspace_5F00_1707.py
This will download the workspace_1707.py script.
Before I even got started, I found that python3 was missing. After some searching, I found it can be installed and verified using:
$ sudo yum install https://centos7.iuscommunity.org/ius-release.rpm
$ sudo yum install python36u
$ sudo yum install python36u-pip
$ sudo yum install python-pip
$ python3.6 -V
Python 3.6.4
It’s worth attempting to run the workspace setup now to see how far things go. Make sure to do this in a new, empty directory. I created a directory named linaro. Navigate there and put the python script in the new directory.
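One possible sequence, assuming the script was downloaded to the directory above the new workspace directory:

$ mkdir linaro
$ cd linaro
$ cp ../workspace_1707.py .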
To set up the software compilation area, run the downloaded python script.
$ python3.6 workspace_1707.py
The first try failed because some code in the python script calls dpkg-query, which comes from the Debian package manager used by Ubuntu. Starting at line 963, there is code that will not run on RHEL 7; it checks for extra packages that need to be installed.
print("\nChecking dependencies... ", end="") missing = [] for d in SCRIPT_DEPS: if not gotdep(d): Script.Log("Missing "+d) missing.append(d) if missing: missing.sort() Script.Abort("The following packages are missing:\n"\ +reduce(lambda x,y: x+y, \ map(lambda m: "\n - "+m,missing))+"\n\nPlease install"\ "install these missing packages using the following "\ "command:\n\n$ sudo apt-get install"+reduce(lambda\ x,y: x+y, map(lambda m: " "+m, missing))) print("OK.")
To try again, comment out everything between the first and last print statements by putting # at the start of each line.
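As a sketch, the start of the edited block would look something like this (line numbers and exact contents may differ slightly in your copy of the script):

print("\nChecking dependencies... ", end="")
#missing = []
#for d in SCRIPT_DEPS:
#    if not gotdep(d):
#        Script.Log("Missing "+d)
#        missing.append(d)
# ... (remaining lines commented out the same way)
print("OK.")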
Before retrying, add the packages that the above code is looking for:
$ sudo yum install openssl-devel
$ sudo yum install python36u-devel
$ sudo yum install libuuid libuuid-devel
$ sudo pip3.6 install crypto
$ sudo pip install pycrypto
$ sudo pip install wand
$ sudo yum install ImageMagick-devel
The most difficult part of the build is the python dependencies. I’m fairly certain both python2 and python3 are used, and making sure the right version of each package can be a challenge.
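If a later step complains about a missing python module, a rough way to check which interpreter received which package (using the packages installed above) is:

$ python -V
$ python3.6 -V
$ pip list | grep -i -E 'crypto|wand'
$ pip3.6 list | grep -i crypto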
The choices to target 64-bit Linux with a busybox ramdisk to be run on the Base Platform FVP are shown below.
The answers to the questions are 3, 1, 1, 2, 1, and y to configure the workspace.
$ python3.6 workspace_1707.py

## Please specify your platform:
 1) [64-bit] Juno
 2) [32-bit] Juno
 3) [64-bit] AEMv8-A Base Platform FVP
 4) [32-bit] AEMv8-A Base Platform FVP
 5) [64-bit] ARMv8 Foundation Model FVP
 6) [32-bit] V2P-CA15x2_CA7x3 (TC2)
> 3

## Please specify whether you want to:
 1) Build from source
 2) Use prebuilt config
> 1

Checking dependencies... OK.

## Please select an environment:
 1) Linux/Android
 2) Baremetal
> 1

## Please select a kernel:
 1) lsk-4.4-armlt -- Supports Android
 2) latest-armlt
> 2

## Please select a filesystem:
 1) BusyBox -- Built from source
 2) OpenEmbedded Minimal -- 15.09
 3) OpenEmbedded LAMP -- 15.09
> 1

Your chosen configuration is as follows:
+-------------+------------------------------------+
| Workspace   | /home/jasand01/linaro/             |
| Platform    | [64-bit] AEMv8-A Base Platform FVP |
| Build       | Build from source                  |
| Environment | Linux/Android                      |
| Kernel      | latest-armlt                       |
| Filesystem  | BusyBox Built from source          |
+-------------+------------------------------------+

Proceed with this configuration? [y/n] > y
This is an appropriate time to take a break while all the tools and software are downloaded. The python script is downloading the GCC compiler for Arm and all the repositories for the various projects which make up the software stack.
When the script is done, messages will appear with the next instructions:
Workspace initialised.

To build:
    chmod a+x <workspace>/build-scripts/build-all.sh
    <workspace>/build-scripts/build-all.sh all

Resulting binaries will be placed in:
    <workspace>/output/fvp/fvp-busybox/

For more information, tutorials, FAQs, and discussions, please see here:
    https://www.community.arm.com/tools/dev-platforms/

Thank you for using the Linaro ARM Platforms workspace script.
Many new directories are created with the source code for the various projects, and the next step is to build the software.
To build the software for the target Cortex-A53x4 Linux system run the build script:
$ build-scripts/build-all.sh all
This is another appropriate time to take a break while all the software is compiled.
When the compilation is done, the output/ directory will contain the generated software: the busybox filesystem, Linux kernel, Linux device tree, u-boot, and trusted firmware.
You will need about 7 GB of disk space to complete the build.
If the build fails, especially after the Linux kernel build is complete, it's going to be time consuming to start over every time. To save time, there are a number of .sh files in the build-scripts/ directory which build each component, and these can be called individually. For example, if the u-boot build fails during build-all.sh it would be better to just invoke:
$ build-scripts/build-uboot.sh build
Each of the .sh scripts can have an argument of clean, build, or package.
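For example, to clean, rebuild, and repackage just u-boot (the same pattern applies to the other component scripts):

$ build-scripts/build-uboot.sh clean
$ build-scripts/build-uboot.sh build
$ build-scripts/build-uboot.sh package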
The results of the build go into the output/ directory with the platform name as a subdirectory; in this case the platform is fvp.
The software can now be run on a Fast Model system which includes the Cortex-A53 quad-core configuration. The hardware system is referred to as the Base FVP. The Base FVP documentation for the system details is available on the Arm Developer website.
Arm Fast Models includes an example system for the Base FVP with the desired Cortex-A53 quad-core configuration, but this configuration is not included with DS-5 Development Studio. DS-5 ships FVPs such as FVP_Base_Cortex-A53x1 (single core) and some multi-cluster versions, such as FVP_Base_Cortex-A72x2-A53x4, but no single-cluster Cortex-A53x4. To create the needed configuration, Fast Models can be used. This is the right place to start anyway, since most projects need to modify the virtual platform to match a new hardware design.
The hardware system is located at $PVLIB_HOME/examples/LISA/FVP_Base/Build_Cortex-A53x4
Since the Fast Models installation directory is not always writable, it's easier to copy the example to a scratch area and compile the system there. Go to any area with write permission and run the commands below to generate the Fast Model system.
$ mkdir base-a53x4; cd base-a53x4; cp -r $PVLIB_HOME/examples/LISA .
$ cd LISA/FVP_Base/Build_Cortex-A53x4
Before building the model, the appropriate compiler needs to be set. On Linux, the default setup is gcc 4.8. This is set at the top of the FVP_Base_Cortex-A53x4.sgproj file:
ACTIVE_CONFIG_LINUX = "Linux64-Release-GCC-4.8";
If necessary, change this to the desired compiler. For example, on Ubuntu 16.04, the appropriate gcc is version 5.4 so the line is edited to be:
ACTIVE_CONFIG_LINUX = "Linux64-Release-GCC-5.4";
For a standard Red Hat Enterprise Linux 7 or CentOS 7 installation, no changes should be needed.
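If you are not sure which gcc is installed on the host, a quick check is the command below; ACTIVE_CONFIG_LINUX can then be set to the matching configuration.

$ gcc --version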
Now, build the platform using:
$ ./build.sh
The build will generate Linux64-Release-GCC-4.8/isim_system, where the name of the directory will vary based on the compiler version used.
The isim_system is the executable which will be run to simulate the hardware and run the software.
To run the compiled software on the Fast Model system a couple of things are needed.
The Base Platform will need xterm and telnet for the UART models. If these are not installed add them using yum.
$ sudo yum install xterm
$ sudo yum install telnet
Next, set the environment variable MODEL to point to the isim_system built in the previous step; it must be an absolute path. For example, using bash, I did:
$ export MODEL=/home/jasand01/base-a53x4/LISA/FVP_Base/Build_Cortex-A53x4/Linux64-Release-GCC-4.8/isim_system
Now that the model is ready, the run script needs a small adjustment. The run script is set up for the Arm AEM (architecture envelope model) and not for the Cortex-A53, so a parameter which sets the number of cores needs to be removed. The Cortex-A53 quad-core system is fixed at 4 cores, and the number of cores is not configurable at runtime as it is for the AEM.
Edit the script model-scripts/run_model.sh and remove line 298, which is just:
$cores \
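If you prefer a one-line edit, the same change can be made with sed (assuming the $cores line is still at line 298 in your copy of the script):

$ sed -i '298d' model-scripts/run_model.sh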
Once the edit is made, start the simulation with the run_model.sh script and point to the compiled software.
$ ./model-scripts/run_model.sh output/fvp/fvp-busybox/uboot
If the change to the run script was not done correctly, the parameter -C cluster0.NUM_CORES=1 will be passed to the simulator, and the simulator will not start because NUM_CORES is not a parameter for this hardware system.
The message will be:
Warning: (W1017) parameter error: 'cluster0.NUM_CORES': parameter not found
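To see which parameters the compiled system does accept, the simulator can list them. This assumes your isim_system supports the standard Fast Models --list-params option; grep keeps the output manageable:

$ $MODEL --list-params | grep cluster0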
If all goes well, the Fast Model simulation will start and the various software will load and run. There are 4 penguins on the LCD showing a quad-core system. Use
# cat /proc/cpuinfo
in the terminal to see that the 4 Cortex-A53 cores are running. Here is the screenshot.
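In case the screenshot is hard to read, a rough sketch of the kind of output to expect is below (exact fields vary with the kernel version; the CPU part value 0xd03 identifies a Cortex-A53):

processor       : 0
CPU implementer : 0x41
CPU part        : 0xd03
...
processor       : 3
CPU implementer : 0x41
CPU part        : 0xd03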
The Linaro deliverables can be complex, but the described python workspace script makes it easy to get the entire thing running on Fast Models. This can be used as baseline for a running system and software for engineers to do custom software development for a new project. Arm Fast Models make it easy to confirm proper software operation, learn about the Linaro deliverables, and figure out how to make best use of these deliverables on new projects.
If you have not used Fast Models for software development, give it a try by requesting an evaluation license using the button below.
Fast Models Downloads