
Dynamic Vector Table in RAM...

I would like to have the option of relocating my vector table to RAM at runtime, from a library (i.e. if the code doesn't need to do this, it doesn't, and the RAM either isn't allocated or is eliminated by section garbage collection at link time).

The actual relocation seems simple enough, but I'm having trouble figuring out how to allocate the smallest chunk of RAM that will do, because of the alignment requirements.
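For reference, the copy-and-retarget step I have in mind looks roughly like this. (A sketch, not a finished implementation: VTOR is written as a raw register address rather than through CMSIS, and the 256-byte alignment and 45-entry count are the SAMD21 values I'd like to stop hard-coding.)

```c
#include <stdint.h>
#include <string.h>

#define SCB_VTOR   (*(volatile uint32_t *)0xE000ED08u)  /* Cortex-M Vector Table Offset Register */
#define VECTOR_COUNT 45   /* SAMD21: 16 system entries + 29 IRQs */

/* Statically allocated table; the alignment is exactly the part I'd
 * like to compute per-device instead of hard-coding. */
static uint32_t ram_vectors[VECTOR_COUNT] __attribute__((aligned(256)));

void relocate_vectors(void)
{
    /* Copy whatever table is currently active (VTOR is 0 out of reset). */
    memcpy(ram_vectors, (const uint32_t *)(uintptr_t)SCB_VTOR,
           sizeof ram_vectors);

    /* Retarget. Real code should disable interrupts around this and
     * issue a DSB so the write takes effect before the next exception. */
    SCB_VTOR = (uint32_t)(uintptr_t)ram_vectors;
}
```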

  1.  At runtime, I can calculate the desired alignment from the size of the vector table or the max IRQ number, and theoretically use aligned_alloc(alignment, size), but that insists that the size be a multiple of the alignment (why? Seems unnecessary. But not the subject here), which wastes quite a bit of space on the CPUs I'm most interested in. (SAMD21: 256-byte alignment for a 180-byte vector table. SAMD51: 1024-byte alignment for a 612-byte vector table.)
  2. I could use a global structure in .bss, with __attribute__((aligned(x))). But essentially all of the device include files I've looked at are nice C code, with enums and/or structures defining the maximum IRQ number or vector table, so the alignment value doesn't seem to be calculable by the preprocessor (which would be needed to use the attribute).
  3. The examples I've found online all seem to rely on putting the new vector table at the very start of RAM, using compiler features that I don't have (__attribute__((at(...)))) and/or that interfere with the conditional omission I'd like to have.
  4. That seems to leave only a string of ugly preprocessor checks for specific processor types, which is less than desirable :-(
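A partial workaround for (1), as far as I can tell: the ARMv7-M rule is "table size rounded up to the next power of two, minimum 128 bytes" (the Cortex-M0+ parts fix VTOR at 256-byte granularity), which is trivial to compute at runtime; and newlib, which the GNU Arm toolchain ships, still provides the older memalign(), which doesn't impose aligned_alloc's size-multiple rule. A sketch (vtor_alignment is my own name, and I haven't verified memalign's overhead on every newlib build):

```c
#include <malloc.h>   /* memalign() -- present in newlib and glibc */
#include <stddef.h>

/* ARMv7-M rule: alignment = table size rounded up to the next power
 * of two, never less than 128 bytes. (Use 256 as the floor on M0+.) */
static size_t vtor_alignment(size_t table_bytes)
{
    size_t a = 128;
    while (a < table_bytes)
        a <<= 1;
    return a;
}

void *alloc_vector_table(size_t table_bytes)
{
    /* memalign(), unlike aligned_alloc(), takes the exact size, so a
     * 180-byte SAMD21 table costs 180 bytes plus allocator overhead
     * rather than a full 256-byte multiple. */
    return memalign(vtor_alignment(table_bytes), table_bytes);
}
```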

Any suggestions?

(I'm using the GNU Arm gcc distribution, and I'm mostly interested in Microchip SAMD2x and SAMD5x chips, but more general would be better! The actual goal is to be able to redefine the handlers for the multi-function SERCOM peripherals (and similar) depending on their configured function, preferably without adding an extra level of indirection.)
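For the SERCOM case specifically, once the table is in RAM the swap itself is just a store into the table (IRQ n lives at word 16 + n). A sketch with made-up handler names; 9 is SERCOM0's IRQ number on SAMD21, but check your device header:

```c
typedef void (*irq_handler_t)(void);

/* RAM copy of the table, viewed as an array of function pointers
 * (entry 0, the initial SP, is never rewritten here). */
static irq_handler_t ram_vectors[16 + 29];   /* SAMD21 size */

static void set_irq_handler(int irqn, irq_handler_t h)
{
    ram_vectors[16 + irqn] = h;
    /* Real code: follow with DSB/ISB so the NVIC can't take the
     * interrupt through a stale entry. */
}

/* Hypothetical mode-specific handlers for one SERCOM instance. */
static void sercom0_spi_handler(void)  { /* ... */ }
static void sercom0_uart_handler(void) { /* ... */ }

void sercom0_use_spi(void)  { set_irq_handler(9, sercom0_spi_handler); }
void sercom0_use_uart(void) { set_irq_handler(9, sercom0_uart_handler); }
```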