Does ARM assume that all Cortex-M microcontrollers are little-endian?

While researching what I could assume about bit-field layout under the ARM Application Binary Interface, I came across core_cm7.h.

It contains the following code:

    /**
      \brief  Union type to access the Application Program Status Register (APSR).
     */
    typedef union
    {
      struct
      {
        uint32_t _reserved0:16;              /*!< bit:  0..15  Reserved */
        uint32_t GE:4;                       /*!< bit: 16..19  Greater than or Equal flags */
        uint32_t _reserved1:7;               /*!< bit: 20..26  Reserved */
        uint32_t Q:1;                        /*!< bit:     27  Saturation condition flag */
        uint32_t V:1;                        /*!< bit:     28  Overflow condition code flag */
        uint32_t C:1;                        /*!< bit:     29  Carry condition code flag */
        uint32_t Z:1;                        /*!< bit:     30  Zero condition code flag */
        uint32_t N:1;                        /*!< bit:     31  Negative condition code flag */
      } b;                                   /*!< Structure used for bit  access */
      uint32_t w;                            /*!< Type      used for word access */
    } APSR_Type;
    
    /* APSR Register Definitions */
    #define APSR_N_Pos                         31U                                            /*!< APSR: N Position */
    #define APSR_N_Msk                         (1UL << APSR_N_Pos)                            /*!< APSR: N Mask */


I intentionally include the first bit mask after the union, because it confirms what the ARM®v7-M Architecture Reference Manual specifies: the N-bit, aka "Negative condition code flag", is always the most significant bit of that register, regardless of endianness. This is also quite clear from the comment for the corresponding bit-field.
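
To make that intended equivalence concrete, here is a minimal sketch of the two access paths the union provides (`device.h` is a placeholder for whatever vendor header pulls in core_cm7.h, and the `apsr_word` argument would typically come from the CMSIS-Core intrinsic `__get_APSR()`):

    #include <stdint.h>
    #include "device.h"   /* placeholder for the vendor header that includes core_cm7.h */

    /* On a little-endian Cortex-M7 both helpers test the same bit: the
       last-declared bit-field N lands in bit 31 of the word, which is
       exactly where APSR_N_Msk points. */
    static inline int n_flag_via_mask(uint32_t apsr_word)
    {
        return (apsr_word & APSR_N_Msk) != 0u;   /* word access + mask  */
    }

    static inline int n_flag_via_bitfield(uint32_t apsr_word)
    {
        APSR_Type apsr;
        apsr.w = apsr_word;                      /* word access ...     */
        return (int)apsr.b.N;                    /* ... then bit access */
    }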

ARM probably assumes that any compiler that compiles that code for a Cortex-M7 target conforms to the Procedure Call Standard for the Arm® Architecture (AAPCS), which seems like a reasonable assumption.

That ABI specifies (among others):

> A sequence of bit-fields is laid out in the order declared [...].

This means that bit-field N, being declared last in the `struct` above, will always be allocated last within the 32-bit container, whatever the target's endianness.

However, if the processor is big-endian, allocation starts from the most significant bit, so bit-field N ends up as the least significant bit of the register, i.e. bit 0, not bit 31!
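
A quick way to see the consequence is a small host-side test (a hypothetical sketch: `apsr_like_t` is a pared-down stand-in for APSR_Type, and the expected outputs assume compilers that follow the AAPCS bit-field rules for a little-endian and a big-endian target respectively):

    #include <stdint.h>
    #include <stdio.h>

    /* Pared-down stand-in for the CMSIS union, just enough to show the layout. */
    typedef union
    {
        struct
        {
            uint32_t _reserved:31;   /* declared first */
            uint32_t N:1;            /* declared last  */
        } b;
        uint32_t w;
    } apsr_like_t;

    int main(void)
    {
        apsr_like_t r = { .w = 0u };
        r.b.N = 1u;

        /* Little-endian AAPCS: fields are allocated from bit 0 upwards,
           so N ends up in bit 31 and this prints 0x80000000 (= APSR_N_Msk).
           Big-endian AAPCS: allocation starts at the most significant bit,
           so N ends up in bit 0 and this prints 0x00000001 instead. */
        printf("0x%08lX\n", (unsigned long)r.w);
        return 0;
    }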

I can find neither a comment nor any compile-time flag in core_cm7.h that would take care of this issue.
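
For comparison, even something as small as the following would make the assumption explicit (a hypothetical sketch; as far as I can tell, `__ARM_BIG_ENDIAN` is the ACLE macro that ARM compilers define when targeting big-endian):

    /* Hypothetical guard that core_cm7.h could contain, but does not:
       refuse to build the bit-field definitions on a big-endian target. */
    #if defined(__ARM_BIG_ENDIAN)
      #error "APSR_Type bit-field layout assumes a little-endian target"
    #endif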

In fact, I have just found another piece of ARM code that seems to confirm my analysis:

    #ifndef __BIG_ENDIAN // bitfield layout of APSR is sensitive to endianness
    typedef union
    {
        struct
        {
            int mode:5;
            int T:1;
            int F:1;
            int I:1;
            int _dnm:19;
            int Q:1;
            int V:1;
            int C:1;
            int Z:1;
            int N:1;
        } b;
        unsigned int word;
    } PSR;
    #else /* __BIG_ENDIAN */
    typedef union
    {
        struct
        {
            int N:1;
            int Z:1;
            int C:1;
            int V:1;
            int Q:1;
            int _dnm:19;
            int I:1;
            int F:1;
            int T:1;
            int mode:5;
        } b;
        unsigned int word;
    } PSR;
    #endif /* __BIG_ENDIAN */


That's obviously for a different core (not Cortex, I'm guessing), but it confirms the principle.

So does ARM just assume that there will never be any big-endian Cortex-M processor, or am I missing something?
