Finding your way through references to Arm processors is not always easy. This article is the first in a series on Arm fundamentals that will introduce various topics to help you become more familiar with the Arm architecture. It aims to help you better understand Arm processors, starting by explaining how they are named, and then showing how knowing your processor matters by introducing a few of their recent features.
If you are curious about what is inside your pretty electronic device, or are a developer who wants to understand how to start getting the best out of your processor, you may find some useful information here. The second part of the article may be technically a bit more challenging than the first, but don't worry! The few code samples are only concrete examples used to illustrate the explanations; the specific details are not necessary to understand the global picture.
The first step is to understand how Arm processors are referenced: it certainly sounds nice, but what is this "dual Cortex-A9, based on Armv7" in your super-phone?
The important thing to recognise is the difference between a processor's family name and the instruction set architecture (ISA) version it implements.
So a "dual Cortex-A9, based on Armv7" is a processor with two Cortex-A9 cores and implementing the 7th version of the Arm architecture. The technically correct naming for this processor is a Cortex-A9 MPCore processor (based on Armv7), comprising two Cortex-A9 cores.
Note that the processor family name is sometimes misleadingly used in place of the actual processor's name. You may for example find a reference to an "Armv6 Arm11 processor". There is actually no "Arm11" processor, but rather Arm1136, Arm1156, or Arm1176 processors, so "Arm11 processor" refers to "a" processor of the Arm11 family.
Before the Cortex family, from the Arm1 up to the Arm11, processors were named after their family, with suffixes specifying each processor's particular features.
Here are a few examples of suffixes. The details are a bit technical, but you can find more information here [1].
Letters indicate specific features of the processor. For example:
- 'F' indicates that the processor has a VFP floating point unit.
- 'T' or 'T2' means that the processor is able to use the Thumb or Thumb2 instruction encoding.
Digits detail hardware characteristics of the processor.
- For example, for an Arm946 processor, the '4' indicates a cache and memory protection unit and the '6' a tightly coupled SRAM interface.
Note that these suffixes are often omitted if they are not relevant to the context, or implied for newer processors that always implement the related features. You will for example not find the 'T' suffix on Cortex processors: they all handle Thumb!
Letter suffixes are also sometimes appended to the architecture name to show that one or a few specific extensions are available. You may for example see references to Armv4T, meaning the Arm architecture version 4 with the Thumb extension.
Examples of Arm families, architectures, and processors.
As you can see in this table and in the diagram below, a family is not restricted to implementing only one architecture version: two different processors from the same family can implement different architecture versions.
You may also have noticed here the '-A', '-R', and '-M' suffixes used both with architecture and processor names. These do not indicate an extension but rather an architecture profile.
The latest Cortex family includes a wider range of processors than earlier families. These processors are suitable for very different kinds of applications, and three profiles were therefore introduced to distinguish what targets they are adapted to:
- The 'A' (Application) profile targets high-performance processors able to run full operating systems, as found in phones or tablets.
- The 'R' (Real-time) profile targets processors for real-time and safety-critical embedded systems.
- The 'M' (Microcontroller) profile targets small, low-power microcontroller-class processors.
The following diagram gives a few examples of Arm processors and an idea of relative performance between processors.
Note that it does not detail all processors, nor does it intend to reflect exact performance comparisons between them.
In our previous example, the dual Cortex-A9 processor is thus a two-core processor from the Cortex family in the application profile. This is currently amongst the highest-performing processors you will find in a phone.
The next step after understanding your processor's name is to start figuring out how it works and how to use it efficiently.
Knowing the features offered by your target processor and how it works is an important step towards designing high-performance software: the software cannot run at its best if it does not exploit the hardware's capabilities. This section highlights this by introducing a few features that enable high performance in recent Arm processors.
Each new architecture version can be considered as refining the previous one. New features like extensions or instructions are added and enable new capabilities for the software. New architecture versions are backwards compatible, except for a few rare cases of instructions (e.g. deprecated instructions; see the Arm Architecture Reference Manual).
Your compiler should know how to optimize the generated code for a specific target, so using these features can be as easy as specifying what target you are compiling for. If you are compiling natively you may want to check that your compiler is correctly configured. If you are cross-compiling you will need to pass various options to let it know about your target processor. Refer to your compiler documentation to learn more about the available options:
$ arm-none-linux-gnueabi-gcc --target-help
The following options are target specific:
  -mabi=                 Specify an ABI
  -march=                Specify the name of the target architecture
  -mcpu=                 Specify the name of the target CPU

  Known Arm CPUs (for use with the -mcpu= and -mtune= options):
    cortex-m0, cortex-m1, cortex-m3, cortex-m4, cortex-r4f, cortex-r4,
    cortex-a9, cortex-a8, cortex-a5, arm1156t2-s, mpcore, ...
[...]
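Your code can also check at build time what it is being compiled for, using the compiler's predefined macros. The snippet below is only a sketch: it assumes a compiler implementing the Arm C Language Extensions (ACLE) macros such as __ARM_ARCH and __ARM_NEON; older GCC versions instead define architecture-specific macros such as __ARM_ARCH_7A__ and __ARM_NEON__.

#include <stdio.h>

int main(void)
{
    /* On ACLE-compliant compilers, __ARM_ARCH holds the architecture
       version being targeted (for example 7 for Armv7). */
#if defined(__ARM_ARCH)
    printf("Targeting Armv%d\n", __ARM_ARCH);
#endif
    /* __ARM_NEON (or __ARM_NEON__ on older compilers) is defined when
       the compiler is allowed to generate NEON code. */
#if defined(__ARM_NEON) || defined(__ARM_NEON__)
    printf("NEON is available\n");
#else
    printf("NEON is not available\n");
#endif
    return 0;
}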
If you are writing assembly you will need to refer to the Arm Architecture Reference Manual to see what instructions are available for your target architecture.
Here is the entry for the 'CLZ' (Count Leading Zeros) instruction:
We can see it is available for architecture versions Armv5T and higher. If 'CLZ' is unavailable or unused the code needed to perform the same operation could look like this:
movs scratch, input            @ Copy the input and set the condition flags.
moveq result, #32              @ If the input is null, return 32.
movne result, #0               @ Else set up the result to 0.
beq .done                      @ If the input is null, we are done.
.CountLeadingZerosInInput:
lsls scratch, scratch, #1      @ Shift left by one bit and set the condition flags.
addcc result, result, #1       @ Increment the result if the bit shifted away was a zero.
bcc .CountLeadingZerosInInput  @ If we are not done, jump back to check the next bit.
.done:
A single 'CLZ' instruction is much more efficient!
This is just one of many other instructions and features that were introduced by the latest architecture versions to allow for faster and denser code, including:
- New instructions, for example for...
- New extensions like...
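Coming back to the 'CLZ' example: when writing C rather than assembly, you usually do not have to spell out either version yourself. GCC and Clang provide the __builtin_clz builtin, which the compiler lowers to a single CLZ instruction when the target architecture supports it, and to a fallback sequence otherwise. A minimal sketch (note that __builtin_clz is undefined for a zero input, so that case is handled explicitly):

unsigned int count_leading_zeros(unsigned int input)
{
    /* __builtin_clz is undefined for 0, so handle that case first. */
    if (input == 0)
        return 32;

    /* On Armv5T and later this typically compiles down to a single CLZ. */
    return (unsigned int)__builtin_clz(input);
}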
Various extensions are available that can greatly improve the performance of specific tasks. For example, most high-end Arm processors (including the Cortex-A8 and Cortex-A9) have a VFP floating-point unit, which provides hardware support for IEEE-754 floating-point operations.
Hardware support can bring massive improvement over emulated software support. Here is a comparison of the code generated for the floating-point multiplication operation in the C code below, compiled with and without VFP enabled [2]:
double vmul(double a, double b) {
    return a * b;
}
With VFP disabled, floating point support is emulated with software.
Sample of the output of [3]:
$ arm-none-linux-gnueabi-gcc -O1 -g vmul.c -o vmul
$ arm-none-linux-gnueabi-objdump -S vmul
double vmul(double a, double b) {
  [...]
  bl __aeabi_dmul              @ Call the library function __aeabi_dmul.
  [...]
}
[...]
__aeabi_dmul:                  @ Library function handling 64-bit IEEE-754 floating-point multiplication.
  push {r4, r5, r6, lr}
  mov ip, #255                 ; 0xff
  orr ip, ip, #1792            ; 0x700
  ands r4, ip, r1, lsr #20
  andsne r5, ip, r3, lsr #20
  [145 instructions omitted]
  mov r0, #0
  pop {r4, r5, r6, pc}
  orr r1, r1, #2130706432      ; 0x7f000000
  orr r1, r1, #16252928        ; 0xf80000
  pop {r4, r5, r6, pc}
The previous __aeabi_dmul assembly code is Copyright © 2009 Free Software Foundation, Inc.
With VFP enabled, efficient instructions are available.
Sample of the output of:
$ arm-none-linux-gnueabi-gcc -O1 -g -c vmul.c -mfpu=vfp -mfloat-abi=hard
$ arm-none-linux-gnueabi-objdump -S vmul.o
double vmul(double a, double b) {
  vmul.f64 d0, d0, d1          @ d0 = d0 * d1
  [...]
The Technical Reference Manual (TRM) for your processor will give you more details about instruction timings. Here is the entry of the Cortex-A9 FPU TRM for the VMUL instruction:
You can refer to the actual document for detailed definitions; what it tells us is that the result of vmul.f64 d7, d6, d7 will be ready after 6 cycles.
A standard simple arithmetic instruction will execute in one cycle. Although the software algorithm emulating the multiplication may not execute its 150 or so instructions for every input, the VFP code will obviously be much faster.
Recent Cortex-A profile processors can feature the NEON SIMD engine, which can be used to speed up media applications.
The idea behind SIMD (Single Instruction Multiple Data) is to perform operations on multiple inputs in parallel rather than sequentially. This is especially useful in video or audio applications, which process large amounts of data.
Consider for example an alpha blending operation in which the background is opaque and the overlaid image has an alpha equal to 0.5. The resulting image is given by:
alpha_out = 1
RGB_out = RGB_src1 * 0.5 + RGB_src2 * 0.5
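Written out sequentially, this blend is just a loop over the channel values; a hypothetical scalar sketch in C, assuming 8-bit channel values, could look like this:

#include <stddef.h>
#include <stdint.h>

/* Blend one colour channel of two images, one value at a time:
   out = src1 * 0.5 + src2 * 0.5, using integer arithmetic. */
void blend_half(uint8_t *out, const uint8_t *src1, const uint8_t *src2,
                size_t count)
{
    for (size_t i = 0; i < count; i++)
        out[i] = (uint8_t)(((unsigned int)src1[i] + (unsigned int)src2[i]) / 2);
}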
Each result pixel only depends on two input pixels. There is no need to know about surrounding pixels to compute the result, and thus the operations can be performed in parallel.
Supposing that our alpha and RGB values are encoded as 8-bit integers, the following architecture can be used to perform parallel operations:
NEON uses a specific set of instructions that allow for operations ranging from simple integer or floating-point arithmetic operations to more complex permutation and memory operations suited to media codecs.
Suppose that the green values for 16 successive pixels of our two input images are loaded in the 128-bit wide NEON registers q1 and q2. Instead of operating on each pixel separately, the following NEON instruction is enough to compute the 16 green result values at the same time:
vhadd.U8 q0, q1, q2 @ For each pair of 8-bit values in q1 and q2, add them and halve the result.
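The same operation is also reachable from C through the NEON intrinsics declared in arm_neon.h; vhaddq_u8 maps to the halving addition used above. The following is a minimal sketch (the function name and the fixed 16-value granularity are assumptions for illustration):

#include <arm_neon.h>
#include <stdint.h>

/* Blend 16 8-bit channel values from each source image in one go. */
void blend_half_neon16(uint8_t *out, const uint8_t *src1, const uint8_t *src2)
{
    uint8x16_t a = vld1q_u8(src1);   /* Load 16 bytes from the first image.       */
    uint8x16_t b = vld1q_u8(src2);   /* Load 16 bytes from the second image.      */
    uint8x16_t r = vhaddq_u8(a, b);  /* Halving add: (a + b) >> 1 per 8-bit lane. */
    vst1q_u8(out, r);                /* Store the 16 blended values.              */
}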
Other convenient instructions help with loading from or storing to memory with interleaving or deinterleaving. Without such SIMD instructions, the code would have to perform these halving and adding operations separately for each pair of 8-bit values.
For more information on how to efficiently use NEON you can refer to this series of articles.
There is much more to know about Arm hardware and software. Although the features shown here are only a sample of what Arm processors can offer, Arm processors should hopefully not sound like a complete mystery any more. You can also look out for the next articles on Arm fundamentals, where I will focus on more specific topics.
_______________________________________________________________________________
[1] Letter suffixes:
- DMI : Debug, enhanced Multiplier, EmbeddedICE
- E : DSP-like Extensions
- F : VFP (Vector Floating Point)
- J : Jazelle
- S : Synthesizable
- T or T2 : Thumb or Thumb2
- Z : TrustZone extension
Digit suffixes:
- Armx1z : Cache and MMU
- Armx2z : Cache, MMU, with "Process ID" support
- Armx3z, Armx5z, Armx7z : Cache, MMU, with physical address tagging
- Armx4z : Cache and MPU (protection unit, no virtual memory)
- Armx6z : Write buffer but no caches
[2] The code was compiled with arm-none-linux-gnueabi-gcc (Sourcery G++ Lite 2010q1-202) 4.4.1.
This code and its following disassembly are only used as a reproducible example. We focus here only on the code generated for the multiplication operation. There would be much to say about the VFP ABI used in the two cases.
[3] You will need to declare an additional empty main function to compile the previous code with this command:
int main(void) {}
Here we compile an executable to get the disassembly of the __aeabi_dmul routine in the objdump output.