Has anybody come across a list of ARM & THUMB instructions that cause deviation from the linear instruction stream?
I've been trying to figure out gdb-stub single stepping using software interrupts; for single stepping you need to find
the next instruction(s) where a breakpoint needs to be set.
There are three cases:
1) The current instruction doesn't change the execution path: the next instruction is the next word.
2) The current instruction is an unconditional jump: the operand defines the next instruction address.
3) The current instruction is a conditional branch: one possible next instruction is the next word, the other possible
instruction address is defined by the operand. (That includes a conditional add with PC as the target, and the like.)
To implement single stepping, I need to tell those cases apart and figure out how to find the possible branch address.
I could go through the manuals of numerous processors instruction by instruction and maybe I'd be done within the next couple of years,
or I could find a list of instructions to check, or a paper that explains how to "decode" the instructions in a useful way.
Also, there don't seem to be many sources for ARM gdb servers or stubs around that use software breakpoints.
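The three cases above could be sketched roughly like this in C. This is only a partial sketch, not a complete decoder: the enum and function names are made up, only A32 B/BL/BLX and register BX/BLX are matched, and anything else that writes to PC (data-processing with r15 as destination, LDM with PC in the list, etc.) still needs its own handling.

```c
#include <stdint.h>

/* Which of the three single-stepping cases an instruction falls into.
   Names are illustrative only. */
typedef enum {
    STEP_LINEAR,       /* case 1: next instruction is the next word  */
    STEP_BRANCH,       /* case 2: always branches to operand address */
    STEP_COND_BRANCH   /* case 3: may branch, may fall through       */
} step_kind;

static step_kind classify_a32(uint32_t instr)
{
    uint32_t cond = instr >> 28;                 /* bits 31..28: condition field */

    /* B/BL/BLX<imm>: bits 27..25 == 101 */
    if (((instr >> 25) & 0x7u) == 0x5u) {
        /* cond 1110 is AL (always taken); cond 1111 here is BLX<imm>,
           which is also always taken */
        return (cond >= 0xEu) ? STEP_BRANCH : STEP_COND_BRANCH;
    }

    /* BX/BLX (register): cond 0001 0010 1111 1111 1111 00x1 Rm */
    if ((instr & 0x0FFFFFD0u) == 0x012FFF10u) {
        return (cond == 0xEu) ? STEP_BRANCH : STEP_COND_BRANCH;
    }

    /* NOT handled here: data-processing or loads with PC (r15) as the
       destination - those change the flow too. */
    return STEP_LINEAR;
}
```

For example, 0xEA000000 (B) would classify as always-branching, 0x1A000000 (BNE) as conditional, and an ordinary MOV between low registers as linear.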
Great work! I'm convinced that you'll get there soon.
It might be a good idea to put the instruction size in the table-entry I mentioned earlier.
You should probably also have in mind that Cortex-A can optionally be Big Endian. This does not mean that the instruction set changes, but I think it means that the load/store changes for 16/32/64 bit access.
-Thus if you load an instruction and a mask from memory, I believe you would be safe as long as you do not use two (or more) instructions to construct the 'immediate' values.
E.g. if you use MOV to load the value into a register and then use LDR to read from memory, you may get the value byte-reversed by the LDR, but the value that MOV loaded would not be byte-reversed, so there would be no match where you expect one.
-So in this case, I believe loading from a table is the best solution.
I trust C-compiler knows its business.
But I probably have to use some kind of table. Depends on the complexity of the instruction handling needed.
Most C compilers would store the 32-bit words in the literal pool; however, if optimization is turned on, this part would break on Big Endian platforms.
I highly recommend the table. That would make things easier and also make sure it would work on any Endian machine.
To check at runtime whether your code is running on big or little endian, you could do this:
static const uint32_t isLittleEndian = 0x00000001;

if (*(const uint8_t *)&isLittleEndian)
{
    // little
}
else
{
    // big
}
On little endian the low byte comes first, on big endian the low byte comes last; thus the dereferenced byte is 0 on big endian and 1 on little endian.
-You could make it an inline function if necessary - or just a global.
I know. I once had to rewrite an endianness test for autotools (a compile-time test), because we were cross-compiling and the default test needed to run a compiled test program. An Intel processor didn't run PowerPC code too well...
This is news to me, though:
"Most C-compilers would store the 32-bit words in the literal pool, however, if turning on optimizing this part would break on Big Endian platforms."
Actually, I better correct this. It only applies if you use the same binary on a multi-endian platform.
Imagine that your code is built for - say - Little Endian. It's now moved to a platform, where you do not know if the platform is running big or little endian. This can be switched by hardware; usually setting a pin high or low at boot.
Thus your program would need to determine the endianness at runtime.
If the masks and data constants are stored in table entries, they will match what you read from memory, no matter whether your load instruction swaps the data or not.
But MOVW and MOVT do not swap the data (here the data is fixed).
This means that the data would not be correct if the platform's endianness does not match your binary, so you would need two binaries.
If you're running an operating system such as Linux, you would not have that problem, because it would most likely only load ELF files whose endianness matches the architecture.
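As a rough illustration of such a table (struct and field names here are invented, and it also carries the instruction size suggested earlier): because both the mask and the expected value live in memory, a load of a table entry and a load of the instruction word are byte-swapped identically, so the comparison stays valid on either endianness.

```c
#include <stdint.h>

/* Sketch of a decode-table entry; names are made up for illustration. */
struct decode_entry {
    uint32_t mask;   /* bits that identify the instruction class */
    uint32_t value;  /* expected bit pattern under the mask      */
    uint8_t  size;   /* instruction size in bytes (2 or 4)       */
};

/* Two example A32 entries; a real table would list every
   flow-changing encoding. */
static const struct decode_entry branch_entries[] = {
    { 0x0E000000u, 0x0A000000u, 4 },  /* B/BL: cond 101x imm24 */
    { 0x0FFFFFD0u, 0x012FFF10u, 4 },  /* BX/BLX (register)     */
};

/* Both operands of the compare were loaded from memory the same way,
   so any byte swapping cancels out. */
static int entry_matches(uint32_t instr, const struct decode_entry *e)
{
    return (instr & e->mask) == e->value;
}
```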
Another caveat: when you load your data and bit-shift / mask out bits, you'll probably need to load byte-by-byte, insert the bytes into a 32-bit word, and then do the comparison - again because the load instruction swaps the bytes when loading from memory on one endianness compared to the other.
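A minimal sketch of such a byte-by-byte load, assuming the instruction stream is stored little-endian in memory (the function name is made up). Because each byte is fetched individually and shifted into place, the result is the same regardless of how a plain 32-bit load would behave on the running core:

```c
#include <stdint.h>

/* Assemble a 32-bit word from four individually loaded bytes,
   assuming little-endian byte order in memory. Byte loads are not
   swapped, so this gives the same result on either endianness. */
static uint32_t fetch_word_le(const uint8_t *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}
```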
A few days ago, I had to make some code, which I wrote for PowerPC (I'm still on PPC) work on an Intel-Mac, and I really got in trouble because it seems Apple changed the picture format for 32-bit ARGB (offscreen) pictures.
In addition, they made some changes to how caching works, and finally I had to fight against endian-problems on bit-shifting.
This was really a brain-twisting experience I wouldn't recommend!
-I'm pretty convinced that there are still some bugs hidden in my code, because I ended up not knowing what I was doing, but it does work for now...
I think I have to try to get the thing together with a subset of the instructions first.
I've been going through the A1 encodings (for a couple of days now) and I still have quite a few instructions to go through. There doesn't seem to be a single document that says it all - I've been reading 3 documents in parallel, and I've had to do some guesswork too. The documents are "ARM® Cortex™-A Series Programmer's Guide, Version 4.0",
"ARM® Architecture Reference Manual, ARMv7-A and ARMv7-R edition, Issue C.c" and "ARM Architecture Reference Manual, Issue I". Figuring out enough about all the instructions will take a couple of weeks yet - probably longer than everything else together. Quite tiring and frustrating work.
Funny how some info seems to get dropped in updates. Like the P, U, N, W and L bits for LDC/LDC2/STC/STC2: I couldn't find their explanation anywhere in the ARMv7-A ARM, and the main instruction encoding table from the older ARM ARM would have been nice to have in the ARMv7-A ARM too.
To not lose all I've done so far, I put my work up on GitHub. It compiles, but quite a lot of code is still missing.
My struggle with the instructions is there in the file: instr.txt in case someone is interested.
The code is still "initial draft" so don't shoot me.
The repo is: turboscrew/rpi_stub · GitHub
It looks like figuring out the ARM ISA on bit level is becoming the most tedious and time consuming task.
When (if?) I get it figured out, I hope I still remember there was a project it was done for.
It doesn't help that aliases and pseudo instructions are treated in the document just like the 'native' instructions.
I think I just have to go through the instructions in the ARMv7-A ARM one by one, manually list all instructions and the bit patterns of all encodings in a text file for easier manipulation, and sort them out there.
The HTML pages are too slow for that, and copying from PDFs works strangely.
The time estimate for finishing the project just quadrupled (at least).
I know OpenOCD does single-stepping too. Perhaps this can be of some help to you ?
In the file cortex_a.c there are a breakpoint-setting function and a single-stepping function, but they get the address as a parameter. I still haven't found where the address is decided, but I think it's somewhere in there, because in the file src/server/gdb_server.c the function fetch_packet implements the remote serial protocol - that's what I've been working on.
Looks very helpful. Thanks, jensbauer.
(There seems to be no 'helpful answer' button, so I clicked 'correct answer' even though I'm not yet sure this solves my problem. The odds look good, though, so it can't go very wrong.)