In a world where increasingly sensitive tasks take place online, cryptography is becoming a critical discipline. Whether it is dating, private banking, or filing taxes, people want reassurance that their data is safe and secure.
Look at side-channel attacks, which hit global prominence in 2018 with the disclosure of the Meltdown and Spectre vulnerabilities. Spectre and Meltdown were the world’s introduction to side-channels: attacks in which nefarious elements use by-products of the processor’s behavior, such as electro-magnetic radiation, power consumption, or microarchitectural timing, to locate, reconstruct, and steal secret data.
Over the last couple of years, a team at Sorbonne Université in Paris, France, has been using Arm’s Cortex-M3 processor source code to model what is really happening in the hardware at the micro-architectural level. They also aim to verify how secure existing silicon really is in the face of side-channel vulnerabilities.
Here, the team share how the project came about; how micro-architectural leakage modeling works; and how they hope it helps make the world more secure…
Cryptography is a complicated field. A system design may appear secure, but the real-life implementation may not be – because the implementation details matter a lot.
You can, for example, work very hard to make sure that no one can discover a secret key - a string of numbers in a file for encoding or decoding cryptographic data. But the power consumption of a processor can actually reveal the operation it is performing, as well as the data that it manipulates with that operation. These vulnerabilities are known as side-channels. The risk here is that nefarious elements can observe the power consumption of the processor and guess the secret key. It is almost like a black art.
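To make that concrete with a toy example (ours, not the team’s model): a common first-order assumption in the side-channel literature is that the power drawn when a value is written to a register tracks the Hamming weight of that value. The Python sketch below simulates exactly that; the noise level and the number of measurements are arbitrary, illustrative choices.

```python
import random

def hamming_weight(x: int) -> int:
    """Number of set bits in x."""
    return bin(x).count("1")

def simulated_power_sample(value: int, noise_sd: float = 0.5) -> float:
    """One simulated power measurement while `value` sits in a register,
    using the assumed Hamming-weight power model plus Gaussian noise."""
    return hamming_weight(value) + random.gauss(0.0, noise_sd)

# Averaging many measurements of the same secret byte washes out the noise
# and reveals its Hamming weight: the data itself shows up in the power.
secret_byte = 0xB2                     # hypothetical secret data
samples = [simulated_power_sample(secret_byte) for _ in range(1000)]
print("true HW:", hamming_weight(secret_byte))
print("estimated HW:", round(sum(samples) / len(samples), 2))
```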
“A security sensitive device, like a president’s cell phone, will have lots of high-order masking. But while masking makes it harder for the attacker to guess what is happening, it does not solve the problem.”
Historically, people have tried to solve this problem by introducing techniques such as masking, where keys are always hidden among the noise of other random data. A security sensitive device, like a president’s cell phone, for example, will have lots of high-order masking. An observer will see a collection of data that is seemingly impossible to interpret.
But while masking makes it harder for the attacker to guess what is happening, it does not solve the problem.
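As a minimal sketch of the masking idea, assuming simple first-order Boolean masking rather than any particular device’s scheme: the secret is never manipulated directly; the code works on a randomly masked value, so any single intermediate observed on its own is uniformly random.

```python
import secrets

def mask_byte(secret_byte: int) -> tuple[int, int]:
    """Split a secret byte into two shares (first-order Boolean masking).
    Each share alone is uniformly random; only their XOR is the secret."""
    mask = secrets.randbits(8)            # fresh random mask per execution
    return secret_byte ^ mask, mask       # the code only ever touches these

def unmask_byte(masked_value: int, mask: int) -> int:
    """Recombine the shares to recover the secret byte."""
    return masked_value ^ mask

key_byte = 0x3A                           # hypothetical secret key byte
share0, share1 = mask_byte(key_byte)
assert unmask_byte(share0, share1) == key_byte
```

High-order masking extends this to more shares. The catch, as noted above, is that the hardware can in effect recombine shares on its own, for instance when two shares pass through the same register back to back, and that is the kind of micro-architectural behavior this project sets out to model.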
To truly protect yourself, you must first understand where the leakages are coming from. Our joint project with Sorbonne is about analyzing and modelling leakages and being able to prove independence between secrets and power consumption. This then gives those implementing cryptographic algorithms a foundation for improved security.
It is obviously very important for companies to find ways to prove that their systems do not leak secret information.
Yet attackers can easily recover secrets, even from masked implementations, because no one has considered the hardware. As well as the companies providing silicon, you have those building software applications on top of it. These people do not necessarily have the details of what is on the silicon.
The goal of our work is to model leakage while considering the micro-architectural aspects, closing the gap between hardware and software. This enables us to create a model for leakage that can confirm a program is leakage-free, with greater certainty than has ever been possible before.
“If we could gain proper access to the processor’s source code, we would not have to guess. We could explain exactly what we saw, and get more accurate results.”
Researchers generally are not able to get inside processors, so they are left trying to reverse-engineer leakages. By observing certain cases, you can make guesses about, for example, the registers in the micro-architecture and what they are doing. Our logic was that if we could gain proper access to the processor’s source code, we would not have to guess. We could explain exactly what we saw, and get accurate results.
Four years ago, we had a PhD student who was working on a verification algorithm to prove that certain software cryptographic implementations were leakage-free. When she completed her PhD, we knew we wanted to continue her work, and go deeper by analyzing the leakage using micro-architectural features.
I had already collaborated with Arnaud: he was involved in supervising a PhD student who now works at Arm. That is how I found out I could request support through Arm Academic Access. So, I asked Arm for the micro-architecture source code of the Cortex-M3 processor. And they gave it to us.
We wanted to look at hardware components in the processor, such as the registers, to observe the values that drive power consumption, and to verify whether a masked implementation leaks secrets through a particular register.
Arnaud, Quentin and I engaged in a close collaboration, related to a research project funded by the French National Research Agency (ANR), with weekly meetings to discuss the project.
Our first step was to map all the elements of the processor, then try to isolate each one. We designed test vectors, small programs that allowed us to see if we could observe leakage in the transitions taking place in a particular component.
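To illustrate what isolating a component could look like (our own simplified sketch, in Python rather than the small Arm programs described above, and not the team’s actual harness): the idea is to sweep the quantity one hypothesises a component leaks, here the Hamming distance of a single register transition, while holding everything else fixed.

```python
import random

def transition_test_vectors(num_pairs: int = 256):
    """Generate (previous, next) operand pairs for one hypothesised
    register transition, keeping the other operand fixed so that only
    the targeted component's activity varies between measurements."""
    fixed_other_operand = 0x55              # held constant to isolate the target
    vectors = []
    for _ in range(num_pairs):
        prev_value = random.randrange(256)
        next_value = random.randrange(256)
        vectors.append({
            "prev": prev_value,
            "next": next_value,
            "other": fixed_other_operand,
            # The leakage hypothesis for this component:
            "transition_hd": bin(prev_value ^ next_value).count("1"),
        })
    return vectors

test_vectors = transition_test_vectors()
```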
If Arm had not agreed to let us have the source code, this work would not have been possible.
Having the source code enabled us to understand the exact pipeline structure, and how data is transferred to memory. If we could correlate certain transitions with the measured power consumption, we would know that a particular component leaks, and we would put it in our model, which we would then use to verify more complex code. Once we had a clear view of the different components in the processor and how they leak, we were able to build a model. We could then use it to analyze and verify masked cryptographic implementations, to know whether they were well masked or not, and whether an attacker could steal secrets.
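A minimal sketch of the kind of correlation test this describes, assuming NumPy and variable names of our own choosing: if the measured power at some sample point correlates with the Hamming distance of a hypothesised register transition, that component is treated as leaking and goes into the model.

```python
import numpy as np

def leaking_sample_points(traces: np.ndarray,
                          transition_hd: np.ndarray,
                          threshold: float = 0.2) -> np.ndarray:
    """Flag trace sample points whose power correlates with a hypothesised
    register transition.

    traces:        shape (n_measurements, n_samples) recorded power traces
    transition_hd: shape (n_measurements,) hypothesised transition leakage
    Returns the indices of sample points whose absolute Pearson correlation
    exceeds the (illustrative) threshold."""
    centred_traces = traces - traces.mean(axis=0)
    centred_hd = transition_hd - transition_hd.mean()
    cov = centred_traces.T @ centred_hd / len(transition_hd)
    corr = cov / (traces.std(axis=0) * transition_hd.std() + 1e-12)
    return np.flatnonzero(np.abs(corr) > threshold)
```

The 0.2 threshold is arbitrary; in practice a statistical test with a chosen significance level would take its place.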
Our main difficulty was that we encountered questions in, for example, the RTL Verilog, which describes how data is transformed as it passes between registers. We were left trying to draw our own schematics of the processor because we were not very familiar with Verilog and its complex code. We would find ourselves wondering whether things were correct, and whether some suspected activities were actually possible. We did not have anyone at Arm who could answer these questions. It would also be difficult to find someone anywhere who could do so, which meant digging into the RTL code ourselves. Karine spent a lot of time looking into it, and we finally arrived at something that corresponded to the processor.
One of the key goals of the project is to execute Arm code on this model of the registers. We can then see where there are registers or other hardware elements whose transitions reveal secrets; when the protections were designed and built, their authors did not know the micro-architecture. If leakages are found, the code can then be adapted, thanks to the precise information provided, including the location of the leakage in the core and the value that has been leaked. The process can be iterated until no more leakage is detected in any of the modeled hardware elements in the core.
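As a toy version of that verify-and-iterate loop, under the strong simplifying assumption that a modeled hardware element can be simulated as a pure function of the secret and the random mask (the `simulate_transition` argument below is a hypothetical stand-in for executing Arm code on the model): the check confirms that the distribution of the element’s transition values is identical for every secret, i.e. independent of it.

```python
from collections import Counter

def transition_independent_of_secret(simulate_transition,
                                     secret_values=range(256),
                                     mask_values=range(256)) -> bool:
    """Exhaustively check one modeled hardware element for leakage.

    `simulate_transition(secret, mask)` stands in for running the masked
    Arm code on the micro-architectural model and returning the transition
    value (e.g. a Hamming distance) seen by that element. The element is
    leakage-free if the distribution of these values does not depend on
    the secret."""
    reference = None
    for secret in secret_values:
        distribution = Counter(simulate_transition(secret, m) for m in mask_values)
        if reference is None:
            reference = distribution
        elif distribution != reference:
            return False            # this transition reveals the secret
    return True

# Illustration: an unmasked value leaks, a freshly masked one does not.
leaky = lambda secret, mask: bin(secret).count("1")           # mask ignored
masked = lambda secret, mask: bin(secret ^ mask).count("1")   # uniform for any secret
assert not transition_independent_of_secret(leaky)
assert transition_independent_of_secret(masked)
```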
The work is not yet published, but it has been submitted and is under review at a conference. We can see a lot of potential uses for the model: there is the modeling part that could be used, but also the verification part, which we hope can be used to make software more resistant.
Companies are acutely aware of side-channel leakages and trying to implement countermeasures, so our work is definitely of great interest to them.
This collaboration with real-world industry was important, and really interesting for us. And while we could have taken another processor, it would have been a lot more work. We are familiar with Arm processors, we had the chips, and we had a board with integrated tools for measuring power. So having the Arm source code made things easier.
“The alternative is that people end up making code larger and more complex, to try to ensure that it is resistant to side-channel attacks. Yet even then it will not be secure, if they do not know what is happening in the processor.”
Working with Arnaud has been a great experience. He is really easy to talk to, and has a strong technical background. He always takes care to explain what he does regarding the code. And we get along really well.
I very much like working on side-channel analysis. It is incredibly motivating, and it does feel like you are helping to contribute something to making the world more secure – even if it is only a small part.
Our hope with this work is that companies will be able to have implementations that are really protected against leakage and these attacks – yet not too expensive.
The alternative is that people end up making code larger and more complex, to try to ensure that it is resistant to side-channel attacks. Yet even then it will not be secure, if they do not know what is happening in the processor.
We still have two years left of the project, and I hope our work with Arm will continue. We plan to first study how to efficiently automate the modelling process, in order to move towards other processors, in particular more complex ones.
An inexpensive yet effective software protection really would be an important step in making the world more secure: because when protection is cheaper, deploying cryptography securely suddenly becomes easier.