
GIC-600 Multichip operation in Linux Kernel

Hello everyone!

I am a developer working on the BSP for an Arm-based chip. The design is a multi-die configuration, with one GIC-600 instance on each of two dies. Each die contains 4 cores, and there is an address offset of 0x2000000000 between the dies.

The software stack is as follows (excluding the components between chipboot and TF-A):

  • Firmware: TF-A
  • Bootloader: U-Boot
  • OS: Kernel 6.1

My questions are as follows:

  1. Looking at the GIC kernel driver code (drivers/irqchip/irq-gic-v3.c), it appears to be structured for a single instance. Is there a way to port GIC multichip support without modifying the driver?

  2. Is there a reason why GIC multichip functionality hasn't been considered in the kernel? Were there any attempts to change the code to support a multi-instance structure, or is there any related history?

Thank you!

  • The kernel is taking the expected approach.  The intent is that all the cores running the same instance of Linux are connected to the same "architectural" GIC.  That means to software it looks like a single GIC (obeying the GICv3/4 spec), even if the hardware implementation might be spread across multiple blocks.

    So, in your system, the GIC-600 logic might be physically split across two chips, but to software it looks like a single GICv3 IRI, with a single GICv3 Distributor and a GIC Redistributor per PE.  The kernel-level driver shouldn't need to know which bits of the GIC functionality/state are implemented by which block on which chip.
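    In practice this is how the stock driver already copes with redistributors living at different base addresses per die: the `arm,gic-v3` devicetree binding accepts a single GICD frame plus multiple redistributor regions via `#redistributor-regions`. A hypothetical sketch for a two-die layout like the one described above (all base addresses, region sizes, and the per-core stride are made-up assumptions, not your SoC's real memory map):

```dts
gic: interrupt-controller@30000000 {
        compatible = "arm,gic-v3";
        #interrupt-cells = <3>;
        interrupt-controller;

        /* One redistributor region per die. */
        #redistributor-regions = <2>;
        /* Assumed stride: two 64 KiB frames (RD_base + SGI_base) per core. */
        redistributor-stride = <0x0 0x20000>;

        reg = <0x00 0x30000000 0x0 0x10000>,  /* GICD: single frame */
              <0x00 0x30100000 0x0 0x80000>,  /* GICR: die 0, 4 cores */
              <0x20 0x30100000 0x0 0x80000>;  /* GICR: die 1, +0x2000000000 */
};
```

    The driver walks each region, matching redistributors to PEs by the affinity value in GICR_TYPER, so it never needs to know which die a given frame sits on.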

  • Thanks for your answer!


    When examining Arm's Neoverse multichip implementation, it appears that all GICs are mapped to the same address.
    However, there are cases, such as the NVIDIA erratum T241-FABRIC-4, where different addresses are used. From the kernel's perspective, does having a different address per chip in a GIC multichip implementation constitute an SoC erratum?

  • I don't know the specifics of the Nvidia case, so I can't comment.  The way the spec is written is that there is a single GICD register frame, regardless of what the physical implementation looks like.  If a design aliased the GICD frame to multiple locations, that's not necessarily non-compliant.  A generic driver (unaware of the aliasing) would just know about and use one copy of GICD.

    Where it could get interesting is in scenarios such as power management or hot swapping.