There are a few common questions that we get from customers when talking about virtual prototypes for software development, and we will address them in this blog.
Virtual prototypes (VPs) are simulation models of systems or sub-systems that enable software development. They help developers get products to market faster by allowing software development to begin before silicon is available, or when hardware access is limited. VPs bring the most value when used in parallel with the hardware design, but still have a place once the hardware is complete: using virtual platforms reduces the lag between hardware delivery and software readiness, speeding time to market.
Within Arm we make extensive use of virtual prototypes when developing the tools and software stacks that support our IP. We do this so that our software is ready at the same time as the hardware becomes available, which is what our customers (and we) expect. It is the models (and a team of talented people) that let us do this. The same models are also used for architecture exploration and IP validation.
Arm’s VP capability is centered around the programmer’s view models of our IP called Fast Models. We deliver VPs to our partners in two ways. Firstly, there is a library of pre-built platforms called Fixed Virtual Platforms (FVPs), available off-the-shelf for many of our processors and IP. Secondly, there is the portfolio of models from which a partner can build their own VP using either Arm tools or solutions from our ecosystem partners.
With that in mind, let us move on to the common questions about virtual prototypes.
Is the source code for the FVPs available?

The source code for the FVPs is supplied as examples in the Fast Models product. These examples can be edited and extended as needed, and can be used either as the starting point for a custom VP or as a reference when designing your own. You will need a Fast Models license for this activity.
The “fixed” aspect of the pre-built FVP is in the composition. It is supplied as a binary object, so you can’t, for example, swap out components. However, there is still scope to change many aspects of the platform through the comprehensive set of parameters that the FVP provides. Through parameters, different functional blocks can be enabled or disabled, and memory and cache configurations changed, and so on.
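As a sketch of how that configuration typically looks in practice, FVPs accept parameters on the command line as `-C component.parameter=value` options (running a model with `--list-params` shows what it actually exposes). The small Python helper below builds such a command line; the platform binary name and the specific parameters are illustrative examples, not a definitive recipe.

```python
# Sketch: building an FVP launch command from a dictionary of parameters.
# The platform binary and parameter names below are illustrative; use
# --list-params on a real FVP to see the parameters it actually exposes.
def fvp_command(binary, image, params):
    """Return an argv list: binary, application image, -C name=value pairs."""
    argv = [binary, "-a", image]  # -a loads an application image
    for name, value in sorted(params.items()):
        argv += ["-C", f"{name}={value}"]
    return argv

cmd = fvp_command(
    "FVP_Base_Cortex-A55x1",           # hypothetical platform binary
    "startup.axf",
    {
        "cache_state_modelled": 0,     # e.g. trade cache detail for speed
        "bp.secure_memory": "false",   # e.g. a base-platform parameter
    },
)
print(" ".join(cmd))
```

In a real flow the same dictionary of parameters can be kept in version control alongside the software, so every developer and CI job launches an identically configured platform.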
Is a virtual prototype just an Instruction Set Simulator?

Yes and no! Inasmuch as the virtual platform simulates the instruction set, one could consider it to be an Instruction Set Simulator (ISS). However, the big difference is that a VP is a functional model of a system or subsystem, albeit one modelled at an abstract level known as either “programmer’s view (PV)” or “loosely timed (LT)”. The VP simulates all the processors in the system, along with system IP and graphics subsystems. The processors could be different Arm cores, or a mix of Arm cores and processor IP from other vendors. An ISS would typically not model implementation-defined features of the core (e.g. caches) or unpredictable behaviors. These are modelled in the Arm VPs.
How accurate are the models?

As we have already mentioned, these are abstract models. They don’t model the micro-architectural detail of the cores and they don’t implement the full timing. These differences are usually transparent to the software running on the processors. For many developers, this raises the question of how much trust can be put in the simulation results.
The models are validated against the same compliance kits and validation suites that are used in the IP validation process, ensuring their accuracy against the modelled hardware. The end result is that, in general, software that runs on the VP will run unmodified on the hardware. However, sometimes there may be reasons to stub out hardware components in the VP, and that will entail software modifications.
All simulation models trade off detail against performance, and VPs are no exception. To broaden the scope of where these LT models can be used, we have developed various plug-in modules – pipeline models, branch prediction models, prefetch models – that improve the correlation between the model and the target IP at a cycle level. These are all switchable, so that they do not impact simulation performance when not in use. Which leads us into the next question…
Simulation models will execute software slower than the target hardware in most cases. However the questions are whether this is a critical issue and what an acceptable level of performance is. There are many factors that affect simulation speed. We mentioned the level of detail in the model: this can have a big effect and care needs to be taken in writing efficient models to maintain simulation speed. One of the biggest speed boosts can often be as simple as using a workstation with a faster CPU and/or more memory to run the simulations. Simulations can be batched up and run in parallel using server farms. The detailed step-by-step execution is often not important, so the user isn’t typically waiting on the simulation. The simulation can run in the background while they get on with other jobs (as I write this blog I’ve got a model running).
Typically, the VP will boot a high-level OS in a handful of minutes. Whilst not real-time, this is sufficient for most use cases.
Can VPs be used for continuous integration?

Given that you don’t need special-purpose hardware to run virtual prototypes, and given the inherent flexibility of the VP, they are an ideal environment for software development based around continuous integration. It is something we use here, and many of our partners do, too.
You can have tens or hundreds of simulations running on VPs distributed across compute farms, and these can be scripted and the results analyzed programmatically. So yes, VPs are the ideal platform for continuous integration.
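As a rough sketch of what that scripting can look like, the Python below fans a batch of simulation jobs out across worker threads and collects their exit codes. The FVP command line shown in the comments is a hypothetical example, and a real CI setup would dispatch jobs to a compute farm scheduler rather than local threads.

```python
# Sketch of a batch runner for simulation jobs, as a CI system might use.
# Each job is an argv list; in real use these would be FVP invocations
# dispatched to a compute farm rather than local worker threads.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_batch(jobs, max_parallel=4):
    """Run each argv list as a subprocess; return exit codes in job order."""
    def run(argv):
        return subprocess.run(argv).returncode
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run, jobs))

if __name__ == "__main__":
    import sys
    # Stand-in jobs; a real batch would launch FVPs, for example
    # ["FVP_Base_Cortex-A55x1", "-a", "test_image.axf"] (hypothetical).
    jobs = [[sys.executable, "-c", f"print('sim {n} done')"] for n in range(4)]
    codes = run_batch(jobs)
    print("failures:", sum(1 for rc in codes if rc != 0))
```

Because each job reports an exit code, the same pattern slots directly into a CI gate: the build fails if any simulation in the batch returns non-zero.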
I’m happy to get involved with answering any other questions about virtual prototypes that you may have - so please, leave a comment below, or send a private message. You can also download and evaluate Fast Models and Fixed Virtual Platforms below.
Evaluate Fast Models
Evaluate Fixed Virtual Platforms