
[non-Reordering Device memory] Is IMPLEMENTATION DEFINED SIZE set in hardware or in software?

The "Armv8 Architecture Reference Manual" says the following about the non-Reordering attribute for Device memory:

"For all memory types with the non-Reordering attribute, the order of memory accesses arriving at a single peripheral of IMPLEMENTATION DEFINED size, as defined by the peripheral, must be the same order that occurs in a simple sequential execution of the program."

Who sets IMPLEMENTATION DEFINED SIZE? Is it set by the hardware engineer who designs the processor, or can IMPLEMENTATION DEFINED SIZE be configured by software, for example, when software sets up the page tables? Could someone point me to a section of the "Armv8 Architecture Reference Manual" that clarifies this?

  • Who sets IMPLEMENTATION DEFINED SIZE?

    The answer is in the statement you quoted from the manual: "... single peripheral of IMPLEMENTATION DEFINED size, as defined by the peripheral ...".

    A peripheral, for instance, may not tolerate any I/O targeting its MMIO registers with a size other than 32 bits.

    I think the statement says that if a programmer attempts an I/O access of an invalid size, the non-Reordering guarantee becomes void. That makes sense: an access of a larger size, if not outright rejected, may be broken down into multiple pieces, and that causes ordering problems between the pieces (in what sequence should the pieces be considered?). Similarly, accesses of smaller sizes, if not rejected, may have to be coalesced, and that destroys the original order in which the smaller accesses arrived.
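    The splitting half of that argument can be sketched in C. This is a toy model, not anything from the Arm manual: `split_store64` is a made-up helper that mimics an interconnect breaking a 64-bit store into two 32-bit pieces, and the `low_half_first` flag stands for the unspecified order in which the pieces may arrive at the peripheral.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: a 64-bit store aimed at a peripheral that only accepts
     * 32-bit accesses must be split into two 32-bit writes. Nothing in the
     * split itself dictates which half arrives first - that is the ordering
     * ambiguity between the pieces. */
    static void split_store64(uint32_t reg[2], uint64_t value, int low_half_first) {
        uint32_t lo = (uint32_t)value;          /* bits [31:0]  */
        uint32_t hi = (uint32_t)(value >> 32);  /* bits [63:32] */
        if (low_half_first) {
            reg[0] = lo;  /* piece 1 arrives first */
            reg[1] = hi;  /* piece 2 arrives second */
        } else {
            reg[1] = hi;  /* opposite arrival order */
            reg[0] = lo;
        }
    }

    int main(void) {
        uint32_t regs_a[2] = {0}, regs_b[2] = {0};
        /* Both arrival orders leave the same final register contents, but a
         * peripheral with side effects on each write could behave differently
         * depending on which piece it saw first. */
        split_store64(regs_a, 0x1122334455667788ULL, 1);
        split_store64(regs_b, 0x1122334455667788ULL, 0);
        assert(regs_a[0] == regs_b[0] && regs_a[1] == regs_b[1]);
        printf("low word %08X, high word %08X\n", regs_a[0], regs_a[1]);
        return 0;
    }
    ```

    For plain memory the final contents are the same either way; for a device register with per-write side effects, the arrival order of the pieces matters, which is why the architecture simply voids the guarantee.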


    As an example, the GICv2 manual says:

    "All registers support 32-bit word accesses with the access type defined in Table 4-1 on page 4-75 and Table 4-2 on page 4-76. In addition, the GICD_IPRIORITYRn, GICD_ITARGETSRn, GICD_CPENDSGIRn, and GICD_SPENDSGIRn registers support byte accesses. Whether any halfword register accesses are permitted is IMPLEMENTATION DEFINED."
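    The permitted access sizes can be illustrated with a short C sketch. An ordinary variable stands in for the MMIO register (on real hardware GICD_IPRIORITYRn sits at a fixed Device-memory address in the distributor's register block); the register name is real, but the variable and the values written are made up for illustration.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Toy stand-in for one GICD_IPRIORITYRn register: a 32-bit word holding
     * the 8-bit priority fields of four interrupts. */
    static uint32_t gicd_ipriorityr0;

    int main(void) {
        /* 32-bit word access: programs all four priority fields at once. */
        volatile uint32_t *word = &gicd_ipriorityr0;
        *word = 0xA0A0A0A0u;

        /* Byte access (also permitted for this register by GICv2): updates a
         * single interrupt's priority without touching its neighbours. */
        volatile uint8_t *bytes = (volatile uint8_t *)&gicd_ipriorityr0;
        bytes[2] = 0x80;

        /* Prints: A0 A0 80 A0 (endian-independent here, because the word
         * value written is byte-symmetric). */
        printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
        return 0;
    }
    ```

    A halfword (`uint16_t`) access to the same register would fall into the IMPLEMENTATION DEFINED category the quote describes: the peripheral may accept it or reject it, and the architecture makes no ordering promise for it.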
