At Oski, we’ve embedded ourselves in the world of Formal verification because we truly believe in the exhaustive nature of Formal to achieve significant confidence in design and verification sign-off. So, it doesn’t surprise me that Arm’s initial experience with Formal compelled them to employ a much deeper Formal sign-off strategy on their latest design. The endeavor resulted in a significant number of high-quality bugs found, but as with any project, there are lessons to be learned about the best ways to take full advantage of what Formal has to offer.
At the latest Decoding Formal event at Oski, Vikram Khosa of Arm gave Formal users a comprehensive look at how Arm plans to further improve its Formal verification strategy. But before we go there, here’s a brief background on the project and how Formal was used.
On previous designs related to Cortex-A57/A72, Formal was used only sporadically: a small Formal team inside Arm applied homegrown methodologies and piloted testbenches on a few select areas. Despite this limited use, the achievements with Formal were significant enough to prompt deploying a full Formal sign-off methodology on the next Cortex-A design.
The Next Gen project applied a mix of light and focused Formal efforts that included sign-off on the verification side (analyzing proof depths and tracking and analyzing coverage) and also sought to get the design teams more involved upfront. Formal implementation started with higher-level planning to map out the scope and list of deliverables for target units spanning the entire CPU, including Instruction Fetch, Core, and Memory System, with an early estimation of time and resources. Unit Formal testbench planning used Oski-certified test plans based on a proven Oski methodology. The block diagram below shows the areas where Formal sign-off was utilized.
Formal sign-off was used in the green shaded areas of the block diagram.
To reach sign-off, Arm employed a comprehensive set of interface properties (some requiring significant state-tracking), end-to-end checkers, initial-value abstractions, and abstraction models. A primary goal was to debug or explain all counterexamples and unreachable states. The team set up target metrics related to property pass rates, unreachable cover rates, proof-core coverage, end-to-end checker development, interface properties, X-Prop properties, and forward-progress checkers.
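To make the idea of a state-tracking interface property concrete, here is a minimal sketch in SystemVerilog. All signal and parameter names (req_valid, credit_return, MAX_CREDITS) are illustrative assumptions, not Arm's actual interfaces: the auxiliary counter tracks outstanding credits so the assertion can express a protocol rule that no single-cycle expression could.

```systemverilog
// Hypothetical credit-based interface property with state-tracking.
// An auxiliary counter models credits; the assertion forbids sending
// a request when no credits remain.
module credit_tracker #(parameter int MAX_CREDITS = 4) (
  input logic clk,
  input logic rst_n,
  input logic req_valid,      // a request consumes one credit
  input logic credit_return   // downstream returns one credit
);
  int unsigned credits;

  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) credits <= MAX_CREDITS;
    else        credits <= credits - int'(req_valid) + int'(credit_return);

  // Interface property: never issue a request with zero credits
  a_no_credit_overflow: assert property (
    @(posedge clk) disable iff (!rst_n)
    (credits == 0) |-> !req_valid);
endmodule
```

In a Formal testbench this kind of property does double duty: as a constraint (assume) on inputs at one unit's boundary and as a checker (assert) on the neighboring unit's outputs.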
Getting the designers involved was critical. To enable this, a number of items had to be put in place. Convenient macros to ease writing embedded properties were provided, as was assistance with interface property definitions: designers could express properties in English while the verification team did the translations. The team trained the designers on the various tools and flow capabilities depending on each designer’s level of engagement. Most designer contributions consisted of providing embedded properties and interface properties, with some designers also performing reactive counterexample debug.
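A convenience macro of the kind described might look like the sketch below. This is an assumption about the flow, not Arm's actual macro: the point is that a designer can drop a one-line, named check next to the RTL it guards without repeating clocking and reset boilerplate.

```systemverilog
// Hypothetical macro for embedded (inline) assertions. Assumes the
// enclosing module's clock is `clk` and its active-low reset is `rst_n`.
`define EMB_ASSERT(name, expr) \
  name: assert property (@(posedge clk) disable iff (!rst_n) (expr))

// Designer usage inside an RTL module:
//   `EMB_ASSERT(a_grant_onehot,  $onehot0(grant))
//   `EMB_ASSERT(a_no_push_full,  !(fifo_push && fifo_full))
```

Because the macro expands to a named assert property, each check shows up individually in pass-rate and proof-core coverage metrics like those tracked on this project.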
Several methodologies and flows were inherited, adopted, or developed during the project, including the planning, tracking, and forward-progress checking methodologies from Oski. Custom flows developed in-house or in collaboration with tool vendors included deep bug hunting, equivalence checking, and deadlock hunting.
Given that not all deep abstractions were implemented or clean yet, it became harder for classic Formal proofs to find new, deeper bugs soon after the Alpha stage, when the RTL features were complete. So a deep bug hunting methodology that included advanced Formal techniques had to be implemented. As a result, a steady stream of bugs and counterexamples was found throughout the rest of the project that could then be debugged or explained away, giving the team much greater confidence in the design.
Throughout the project, Formal gained mindshare and acceptance within the larger designer team resulting in deeper engagement and adoption. Formal also replaced simulation as the primary verification flow for floating point designs, and a strong Formal team of engineers who are well-versed in state-of-the-art formal techniques and methodologies is set to tackle the next design.
But with all the success, there are some lessons that were learned as Vikram pointed out. First and foremost, follow the bugs. Focus debug resources on areas of the design where initial bugs are found. Increasing the debug bandwidth will result in finding more bugs much faster. Sufficient and sustained attention on high-value end-to-end checkers will yield greater bug results.
What else pays off, he says, is rigorous ground-up planning. Invest the time and resources to prepare a “formal” Formal test plan from the beginning and get designers involved in the process early. Preparing the test plan at the beginning pays for itself by the end: it ensures the design is micro-architected to be friendly to Formal verification and that all important end-to-end checkers are part of the plan.
The interface property development effort was underestimated, and with better debug/maintenance co-ownership, the overhead could have been greatly reduced. Getting designers more involved in bring-up, and mitigating interface complexity through aggressive over-constraining and interface-constraint prioritization, would also have helped.
In the end, the lessons learned from Arm’s successful deployment of Formal sign-off will further strengthen and cement a Formal strategy for future projects, both within Arm and in your own organization.
If you’d like to know more about how to develop Formal leadership within your engineering organization, and how that can result in effective verification with increased confidence in sign-off and greater ROI, you can read more about Formal Program Leadership by clicking the button below:
Formal Verification Program Leadership
Hi Vigyan, very interesting engagement indeed. I'm all for the rigorous formal verification planning part. One area that may result in greater ROI is refining the unsat-core-based coverage. I can show you some better results from our tool next time :) -Jun