Has anybody come across a list of ARM & Thumb instructions that cause a deviation from the linear instruction stream?
I've been trying to figure out gdb-stub single stepping using software interrupts, and when single stepping you need to find
the address(es) of the next instruction(s), where the breakpoint instruction needs to be set.
There are three cases:
1) The current instruction doesn't change the execution path. The next instruction is the next word.
2) The current instruction is a jump. The operand defines the next instruction's address.
3) The current instruction is a conditional branch. One possible next instruction is the next word; the other possible
instruction address is defined by the operand. (That includes a conditional add with PC as the target, and the like.)
To implement single stepping, I need to tell those cases apart and figure out how to find the possible branch addresses.
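Here is a rough sketch of what I think such a classifier could look like for the 32-bit ARM encodings. The name classify_arm is made up, the masks are my own (untested) reading of the ARM ARM, Thumb would need a separate table, and trapping instructions (SWI, undefined) plus oddities like LDRH into PC are ignored:

#include <stdint.h>

enum pc_flow { FLOW_LINEAR, FLOW_BRANCH, FLOW_CONDITIONAL };

/* Untested sketch: sort a 32-bit ARM (ARMv4/v5 style) instruction
 * into the three cases above. */
enum pc_flow classify_arm(uint32_t insn)
{
    uint32_t cond = insn >> 28;
    uint32_t op   = (insn >> 21) & 0xFu;    /* data-processing opcode */
    int changes_pc = 0;

    if ((insn & 0x0E000000u) == 0x0A000000u) {
        changes_pc = 1;                     /* B, BL (and BLX imm) */
    } else if ((insn & 0x0FFFFFD0u) == 0x012FFF10u) {
        changes_pc = 1;                     /* BX Rm, BLX Rm */
    } else if ((insn & 0x0C000000u) == 0x00000000u &&
               (insn & 0x0000F000u) == 0x0000F000u &&
               (op < 8u || op > 11u)) {     /* TST/TEQ/CMP/CMN have no Rd */
        changes_pc = 1;                     /* data processing, Rd = PC */
    } else if ((insn & 0x0C100000u) == 0x04100000u &&
               (insn & 0x0000F000u) == 0x0000F000u) {
        changes_pc = 1;                     /* LDR into PC */
    } else if ((insn & 0x0E100000u) == 0x08100000u &&
               (insn & 0x00008000u) != 0u) {
        changes_pc = 1;                     /* LDM with PC in the list */
    }

    if (!changes_pc)
        return FLOW_LINEAR;                 /* case 1 */
    return cond >= 0xEu ? FLOW_BRANCH       /* case 2 */
                        : FLOW_CONDITIONAL; /* case 3 */
}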
I could go through the manuals of numerous processors instruction by instruction, and maybe I'd be done within the next couple of years;
or I could find a list of instructions to check, or a paper that explains how to "decode" the instructions in a useful way.
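For case 2, at least the plain B/BL encoding is mechanical to decode: sign-extend the 24-bit immediate, shift it left by two and add it to PC + 8 (the visible PC runs two words ahead because of the pipeline). A sketch, with arm_b_target being a made-up name:

#include <stdint.h>

/* Branch target of an ARM B/BL sitting at address pc. */
uint32_t arm_b_target(uint32_t pc, uint32_t insn)
{
    int32_t off = (int32_t)(insn << 8) >> 8;   /* sign-extend imm24 */
    return pc + 8u + ((uint32_t)off << 2);
}

As a sanity check, 0xEAFFFFFE (the classic branch-to-self) gives pc + 8 - 8 = pc.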
Also, there don't seem to be many sources for ARM gdb servers or stubs around that use software breakpoints.
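Once the next address is known, planting the breakpoint itself is the easy part. A minimal sketch, assuming a flat, writable code region; swbp_set/swbp_clear are made-up names, and a real stub also has to clean the D-cache and invalidate the I-cache after patching:

#include <stdint.h>

#define ARM_BKPT    0xE1200070u  /* BKPT #0, ARM encoding   */
#define THUMB_BKPT  0xBE00u      /* BKPT #0, Thumb encoding */

struct swbp {
    uint32_t addr;
    uint32_t saved;   /* original instruction word/halfword */
    int      thumb;   /* nonzero when the target is Thumb code */
};

/* Swap the instruction at addr for a BKPT, keeping the original
 * so it can be restored once the step has been taken. */
void swbp_set(struct swbp *bp, uint32_t addr, int thumb)
{
    bp->addr  = addr;
    bp->thumb = thumb;
    if (thumb) {
        bp->saved = *(volatile uint16_t *)(uintptr_t)addr;
        *(volatile uint16_t *)(uintptr_t)addr = THUMB_BKPT;
    } else {
        bp->saved = *(volatile uint32_t *)(uintptr_t)addr;
        *(volatile uint32_t *)(uintptr_t)addr = ARM_BKPT;
    }
}

/* Put the original instruction back. */
void swbp_clear(const struct swbp *bp)
{
    if (bp->thumb)
        *(volatile uint16_t *)(uintptr_t)bp->addr = (uint16_t)bp->saved;
    else
        *(volatile uint32_t *)(uintptr_t)bp->addr = bp->saved;
}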
Reading/writing 400KByte/sec.
What kind of reading/writing are you talking about?
I remember when I had plans to build a computer based on the 68030.
The plan was trashed by the fact that, in those days, home-made address decoding would have been too slow to be worth the while. I considered 22V10 PALs and some FPGAs, but the delays were far too big. With a mask-programmed gate array it would have been a beast, but VERY expensive. I guess a mask cost something like $1,000,000 back then.
The 68030 could do memory accesses in synchronous nibble mode in 55 ns, as I recall.
(The dynamic bus sizing was, as such, a really exciting idea.)
turboscrew wrote: Reading/writing 400KByte/sec. What kind of reading/writing are you talking about?
On my Atari ST, I could reach 400 KByte per second reading or writing at best (by using the movem.l instruction).
The main reason for this was most likely Atari's bus architecture.
Lucky me: any Cortex-M0, even if running at only 1 MHz, is faster.
I recall the Atari ST was quite a nice machine for its time. The 8088 wasn't that impressive either, compared to any ARM.
I had to settle for a Commodore 64 with the famous "washing machine processor" (6510).