
Is debugging in big-endian possible?

Either I'm doing something wrong or debugging in big-endian mode is totally broken and has been for a while.

I'm using a MacBook Pro M1 Max. I've tried GCC 10.3 through 12.2, and all versions exhibit the same strange behaviour. I'm open to trying older versions if they're known to work better, but I don't want to keep stabbing in the dark.

The target is a Cortex-A8 (specifically the AM3358).

The code is compiled with both the -mbe8 and -mbig-endian flags; anything less does not run at all. But "working" is generous.
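
For reference, the build boils down to something like this. The toolchain name and -mcpu value are from my setup, and the linker script name is just a placeholder:

    arm-none-eabi-gcc -mcpu=cortex-a8 -mbig-endian -g -O2 -c main.c
    # -mbe8 matters at link time: it selects the BE8 image format
    # (big-endian data, little-endian instruction order)
    arm-none-eabi-gcc -mcpu=cortex-a8 -mbig-endian -mbe8 -T am3358.ld -o app.elf main.o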

I can run the code and halt it, repeatedly and indefinitely, with no ill effects; that part appears to work fine. The weirdness starts when I set breakpoints or step through code: sometimes they work, and sometimes the CPU runs off into la-la land. Even when a breakpoint does land, execution seldom continues correctly afterwards, sometimes crashing outright or jumping around RAM at random.

My best guess, looking at the (wrong) disassembly output, is that the CPU is executing the endian-swapped version of each opcode once breakpoints are set. If I understand BE8 correctly, instructions are stored little-endian while data accesses are big-endian, so a debugger that plants software breakpoints by writing the opcode with a byte-swapped data access would corrupt the instruction stream in exactly this way. That could only happen if the debugger is rewriting memory, which seems odd to me; it's as if it isn't even trying to use hardware breakpoints.
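
If anyone wants to test the same theory, forcing hardware breakpoints from GDB should sidestep any memory rewriting. Something like the following (standard GDB commands; the address range is a placeholder, not my actual memory map):

    (gdb) hbreak main                   # hardware breakpoint, no opcode rewrite
    (gdb) info breakpoints              # should say "hw breakpoint", not "breakpoint"
    (gdb) mem 0x80000000 0x88000000 ro  # mark the code region read-only so GDB
                                        # refuses to plant software breakpoints there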

I double-checked under Ozone, where breakpoints and stepping are fine, but it has problems of its own: it won't run at all with the -mbe8 flag set (only with -mbig-endian alone), and in that mode all the literal pools get byte-swapped. So there's clearly some disconnect as to "how this is supposed to work."

Maybe I'm doing it wrong? So far I've resorted to printf debugging, which works but isn't efficient. Getting GDB working properly would be really nice, so any tips would be appreciated.
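
For what it's worth, since printf is working, a trivial sanity check like this (plain C, nothing target-specific) at least confirms the image really is executing big-endian:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t probe = 0x01020304u;
        const uint8_t *bytes = (const uint8_t *)&probe;

        /* On a big-endian target the most significant byte is stored first */
        printf("first byte: 0x%02x -> %s-endian\n",
               bytes[0], bytes[0] == 0x01 ? "big" : "little");
        return 0;
    }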