What does "the architecture permits caching of GPT information in a TLB" mean?

Hello everyone, I am reading chapter D9 "The Granule Protection Check Mechanism" of the Arm A-profile architecture, and I have a couple of questions:

  1. I_PZSYC says "For implementations that choose to do so for area or performance reasons, the architecture permits caching of GPT information in a TLB." I can understand setting up a TLB-like cache to accelerate GPT lookups, but is it appropriate to put the GPT directly into the TLB? That seems to save a table, but doesn't it defeat the TLB's original purpose of translating VAs to PAs?
  2. What new instructions have been added to the Arm architecture for GPT management, other than TLBI PAALLOS, etc.?
  • but is it appropriate to put the GPT directly into the TLB, which seems to save a table but essentially destroys the TLB's original purpose of translating VA and PA?

    I don't think I agree.  The purpose of TLBs is to make look-ups faster and more efficient.  In a system with Stage 1, Stage 2 and GPC you could imagine three TLB-like structures, each associated with one kind of look-up.  Or, you could imagine a single TLB which stores the end-to-end result.  Or, you could imagine some hybrid of the two.  The architecture is written to allow any of these approaches; it's up to the micro-architecture designer to decide which gives the desired PPA (power, performance, area) trade-off.
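    The options above can be sketched as a toy software model. This is purely illustrative: real TLBs are associative hardware structures, not dictionaries, and the page size, walk functions, and GPI value used below are invented placeholders, not anything from the specification.

    ```python
    # Toy model of the two caching strategies: separate TLB-like caches
    # per lookup (Stage 1, Stage 2, GPC) versus a single combined TLB
    # that caches the end-to-end VA -> (PA, GPI) result.
    # All walk functions and values are hypothetical placeholders.

    PAGE = 0x1000  # assume 4KB granules/pages throughout

    class SplitTLBs:
        """Three small caches, one per kind of look-up."""
        def __init__(self, s1_walk, s2_walk, gpc_walk):
            self.s1, self.s2, self.gpc = {}, {}, {}
            self.walk = (s1_walk, s2_walk, gpc_walk)
            self.walks = 0  # table walks performed (i.e. cache misses)

        def translate(self, va):
            s1_walk, s2_walk, gpc_walk = self.walk
            off = va & (PAGE - 1)
            vpage = va & ~(PAGE - 1)
            if vpage not in self.s1:                 # Stage 1: VA -> IPA
                self.walks += 1
                self.s1[vpage] = s1_walk(vpage)
            ipage = self.s1[vpage]
            if ipage not in self.s2:                 # Stage 2: IPA -> PA
                self.walks += 1
                self.s2[ipage] = s2_walk(ipage)
            ppage = self.s2[ipage]
            if ppage not in self.gpc:                # GPC: PA -> GPI
                self.walks += 1
                self.gpc[ppage] = gpc_walk(ppage)
            return ppage | off, self.gpc[ppage]

    class CombinedTLB:
        """One cache holding the end-to-end result of all three look-ups."""
        def __init__(self, s1_walk, s2_walk, gpc_walk):
            self.tlb = {}
            self.walk = (s1_walk, s2_walk, gpc_walk)
            self.walks = 0

        def translate(self, va):
            s1_walk, s2_walk, gpc_walk = self.walk
            off = va & (PAGE - 1)
            vpage = va & ~(PAGE - 1)
            if vpage not in self.tlb:
                ppage = s2_walk(s1_walk(vpage))      # full walk on a miss
                self.walks += 3
                self.tlb[vpage] = (ppage, gpc_walk(ppage))
            ppage, gpi = self.tlb[vpage]
            return ppage | off, gpi
    ```

    Either model returns the same translation and GPI for the same inputs; what differs is which intermediate results get cached and invalidated, which is exactly the trade-off the architecture leaves open to the micro-architect.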

    What new instructions have been added to the ARM for GPT table processing, other than TLBI PAALLOS, etc.?

    Try this page and CTRL+F for GPT, I think there are four:

    Arm A-profile Architecture Registers
