Today at Hot Chips in Cupertino, I had the opportunity to present the latest update to our Armv8-A architecture, known as the Scalable Vector Extension or SVE. Before going into the technical details, the key points about Armv8-A SVE are:

- It is a new vector extension to the AArch64 execution state, developed specifically for the vectorization of HPC scientific workloads.
- It complements NEON (Advanced SIMD) rather than replacing it.
- Implementations may choose a vector length of anywhere from 128 to 2048 bits per vector register.
- It supports a vector-length agnostic (VLA) programming model: code written once adapts to whatever vector length the hardware provides.
I’ll first provide some historical context. Armv7 Advanced SIMD (aka the Arm NEON instructions) is ~12 years old, a technology originally intended to accelerate media processing tasks on the main processor. It operated on well-conditioned data in memory, with fixed-point and single-precision floating-point elements in sixteen 128-bit vector registers. With the move to AArch64, NEON gained full IEEE double-precision floating point and 64-bit integer operations, and the register file grew to thirty-two 128-bit vector registers. These evolutionary changes made NEON a better compiler target for general-purpose compute. SVE is a complementary extension that does not replace NEON; it was developed specifically for the vectorization of HPC scientific workloads.
Immense amounts of data are being collected today in areas such as meteorology, geology, astronomy, quantum physics, fluid dynamics, and pharmaceutical research. Exascale computing (the execution of a billion billion, or 10^18, floating-point operations per second, i.e. one exaFLOPS) is the target that many HPC systems aspire to over the next 5-10 years. In addition, advances in data analytics and areas such as computer vision and machine learning are already increasing the demand for greater parallelization of program execution, today and into the future.
Over the years, considerable research has gone into determining how best to extract more data-level parallelism from general-purpose programming languages such as C, C++ and Fortran. This has resulted in the inclusion of vectorization features such as gather load and scatter store, per-lane predication, and of course longer vectors.
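To give a flavour of how such features surface to the programmer, below is a minimal sketch of a gather load written with the SVE C intrinsics (arm_sve.h, from the Arm C Language Extensions for SVE). The kernel and all of its names are hypothetical illustrations, not code from the SVE specification; in practice a vectorizing compiler would generate equivalent instructions from a plain scalar loop.

```c
#include <arm_sve.h>
#include <stdint.h>

/* Hypothetical kernel: out[i] += tab[idx[i]] for i in [0, n).
 * The indexed read of tab maps onto a single gather load, and the
 * loop tail is handled by per-lane predication rather than a
 * scalar fix-up loop. */
void gather_accumulate(double *out, const double *tab,
                       const uint64_t *idx, int64_t n)
{
    for (int64_t i = 0; i < n; i += svcntd()) {   /* svcntd(): 64-bit lanes per vector */
        svbool_t pg = svwhilelt_b64(i, n);        /* predicate: lanes still in bounds */
        svuint64_t vidx = svld1(pg, &idx[i]);     /* contiguous load of indices */
        svfloat64_t vt = svld1_gather_index(pg, tab, vidx); /* gather of tab[idx[i]] */
        svfloat64_t vo = svld1(pg, &out[i]);
        svst1(pg, &out[i], svadd_x(pg, vo, vt));  /* predicated add and store */
    }
}
```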
A key choice is the most appropriate vector length, and many factors may influence that decision.
Rather than mandating a single fixed vector length, SVE allows CPU designers to choose the most appropriate vector length for their application and market, from 128 bits up to 2048 bits per vector register. SVE also supports a vector-length agnostic (VLA) programming model that adapts to the available vector length. Adopting the VLA paradigm allows you to compile or hand-code your program for SVE once, and then run it at different implementation performance points, avoiding the need to recompile or rewrite it when longer vectors appear in the future. This reduces deployment costs over the lifetime of the architecture; a program just works, and executes wider and faster.
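As a concrete illustration of the VLA model, here is a sketch of a daxpy kernel using the same SVE C intrinsics (again an illustration, not code from the specification). Note that no vector width appears anywhere in the source: svcntd() and the svwhilelt predicate adapt to whatever length the hardware implements, so the same binary runs unchanged on a 128-bit or a 2048-bit SVE machine.

```c
#include <arm_sve.h>
#include <stdint.h>

/* y[i] = a * x[i] + y[i], written vector-length agnostically. */
void daxpy(double *restrict y, const double *restrict x,
           double a, int64_t n)
{
    for (int64_t i = 0; i < n; i += svcntd()) {
        svbool_t pg = svwhilelt_b64(i, n);       /* active lanes for this pass */
        svfloat64_t vx = svld1(pg, &x[i]);
        svfloat64_t vy = svld1(pg, &y[i]);
        vy = svmla_x(pg, vy, vx, svdup_f64(a));  /* vy += vx * a */
        svst1(pg, &y[i], vy);
    }
}
```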
Scientific workloads, mentioned earlier, have traditionally been written to exploit as much data-level parallelism as possible, with careful use of OpenMP pragmas and other source-code annotations. It’s therefore relatively straightforward for a compiler to vectorize such code and make good use of a wider vector unit. Supercomputers are also built with the wide, high-bandwidth memory systems necessary to feed a longer vector unit.
However, while HPC is a natural fit for SVE’s longer vectors, SVE also offers an opportunity to improve vectorizing compilers in ways that will be of general benefit over the longer term, as other systems scale to support increased data-level parallelism.
It is worth noting at this point that Amdahl’s law tells us that the theoretical limit on a task’s speedup is governed by the fraction of the work that cannot be parallelized. If you succeed in vectorizing 10% of your execution and make that code run 4 times faster (e.g. a 256-bit vector allows 4x64-bit parallel operations), then you’ve reduced 1000 cycles to 925 cycles, a limited speedup for the power and area cost of the extra gates. Even if you could vectorize 50% of your execution infinitely (unlikely!), you’ve still only doubled the overall performance. You need to vectorize much more of your program to realize the potential gains from longer vectors.
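To make the arithmetic explicit, Amdahl’s law gives the overall speedup S when a fraction p of the execution is accelerated by a factor s:

```latex
S = \frac{1}{(1 - p) + p/s}
```

With p = 0.10 and s = 4 this gives S = 1/(0.90 + 0.025) ≈ 1.08, which is the 1000-to-925-cycle example above; with p = 0.5 and s growing without bound, S approaches 1/0.5 = 2.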
So SVE also introduces novel features that begin to tackle some of the barriers to compiler vectorization. The general philosophy of SVE is to make it easier for a compiler to vectorize code opportunistically, in places where it would not normally be possible or cost-effective to do so.
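One example of such a feature is the first-faulting load: lanes beyond the first that would fault (say, by running off the end of a mapped page) are recorded as inactive in the first-fault register (FFR) instead of trapping, which lets a compiler safely vectorize loops whose trip count is unknown in advance. The following strlen sketch, again written with the SVE C intrinsics, illustrates the idea; it is an illustration of the mechanism, not a tuned library routine.

```c
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative vectorized strlen using first-faulting loads.
 * Speculatively reading past the NUL terminator is safe: a lane
 * that would fault is simply masked off in the FFR. */
size_t strlen_sve(const char *s)
{
    size_t len = 0;
    const svbool_t all = svptrue_b8();

    svsetffr();                                   /* set every FFR lane true */
    for (;;) {
        svuint8_t v = svldff1_u8(all, (const uint8_t *)s + len);
        svbool_t loaded = svrdffr();              /* lanes that actually loaded */
        svbool_t nul = svcmpeq_n_u8(loaded, v, 0);

        if (svptest_any(loaded, nul))             /* NUL among the loaded lanes */
            return len + svcntp_b8(all, svbrkb_b_z(loaded, nul));

        if (svptest_last(all, loaded)) {
            len += svcntb();                      /* full vector loaded, no NUL yet */
        } else {
            len += svcntp_b8(all, loaded);        /* skip the lanes that did load */
            svsetffr();                           /* retry from the faulting lane */
        }
    }
}
```

If the very first element itself faults (an unterminated string running into an unmapped page), the load raises a real fault, just as a scalar byte-by-byte loop would.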
SVE is targeted at the A64 instruction set only, as a performance enhancement associated with 64-bit computing (known as AArch64 execution in the Arm architecture). A64 is a fixed-length instruction set, in which every instruction is encoded in 32 bits. Currently 75% of the A64 encoding space is already allocated, making it a precious resource. SVE occupies just a quarter of the remaining 25%, in other words one sixteenth of the A64 encoding space, as follows:
The variable-length aspect of SVE is managed through predication, so it requires no additional encoding space. Care was also taken to constrain the space consumed by predicated execution itself. Load and store instructions are assigned half of the allocated SVE space, kept in check by careful consideration of addressing modes. Nearly a quarter of the SVE space remains unallocated and available for future expansion.
In summary, SVE opens a new chapter for the Arm architecture, in terms of both the scale of and the opportunity for increased vector processing on Arm processor cores. These are early days for SVE tools and software, and it will take time for SVE compilers and the rest of the SVE software ecosystem to mature. HPC is the current focus and catalyst for this compiler work, and it creates development momentum in areas such as Linux distributions and optimized libraries for SVE, as well as in Arm and third-party tools and software.
We are already engaging with key members of the Arm partnership, and will now broaden that engagement across the open-source community and wider Arm ecosystem to support development of SVE and the HPC market, enabling a path to efficient Exascale computing.
Following on from this announcement and the details provided, initial engagement with the open-source community will begin with the upstreaming and review of tools support and associated standards.
A Beta release of the SVE supplement to the Armv8-A Architecture Reference Manual is now available to download.
Annotated SVE VLA programming examples can be found here:
[CTAToken URL = "https://developer.arm.com/hpc/a-sneak-peek-into-sve-and-vla-programming" target="_blank" text="Download - A Sneak Peek into SVE and VLA Programming" class ="green"]
My reading of it is that they do not tell the user whether a page is actually present; they indicate whether the virtual address is invalid or not, and the page table entries must be able to distinguish those two possibilities.
Having had another read, I believe you are right: they intend that it tests whether the page is actually in memory, or at least that is an allowable way for it to act, and probably what most implementations would do, as it could be quite a bit easier to implement. So one has to be a little more careful than I was thinking, and only consider it a real fault when the first element isn't found.
Yes, it does leak some paging information. I am quite amazed at some of the tricks hackers have managed to pull off, but I don't see any particular security implications in that. It might be counted as a failure of virtualization, I guess, if we wanted to guarantee that a program's execution was completely insensitive to paging. In that case one would have to do something like what I sketched out above.
It would even be allowable, I guess, for an implementation to limit the number of accesses it performs per instruction and set the mask false for any further elements requested, irrespective of whether those accesses would succeed.
I know this is likely not the right place to ask questions about SVE, but I can't really find this information, or a better place to ask about it, anywhere.
My question is: how does the first-faulting class of instructions interact with paging? Does it always "fault" at a page boundary? Does it create different kinds of faults (i.e. does the kernel need to set up a special fault handler for paging)? Or does it leak paging information to userspace (i.e. can a user process use such an instruction to tell whether a page is swapped out, without letting the kernel know, by doing a load at a page boundary)?
Edit: according to the pdf linked above, it seems that it's the last one. Could this possibly be a security problem?
Thanks very much for that pdf of the conference talk with all those details.
And my first action was to look at the very end at strcmp, and if I'm reading it right it has a mechanism for coping with strings lying near the end of a page. Hooray for that!
...
And after a good look through it, I think it looks very good indeed. I hope it becomes a standard feature, even if only at the minimum width.