Hello everyone!
I am a developer working on the BSP for an ARM-based chip. The chip I am currently developing is configured in a multi-die environment, with one GIC chip on each of two dies. Each die contains 4 cores, and there is an address offset of 0x2000000000 between the dies.
The software stack is as follows (excluding the components between chipboot and TF-A):
My questions are as follows:
Looking at the GIC kernel driver code (drivers/irqchip/irq-gic-v3.c), it appears to be structured for a single instance. Is there a way to port GIC multichip support without modifying the driver?
Is there a reason why GIC multichip functionality hasn't been considered in the kernel? Were there any attempts to change the code to support a multi-instance structure, or is there any related history?
Thank you!
Thanks for your answer!
When examining ARM's Neoverse multichip implementation, it appears that all GICs are mapped to the same address. However, there are cases, such as NVIDIA erratum T241-FABRIC-4, where different addresses are used. From the perspective of kernel support, does having a different address per chip in a GIC multichip implementation constitute an SoC erratum?
I don't know the specifics of the Nvidia case, so can't comment. The way the spec is written, there is a single GICD register frame, regardless of what the physical implementation looks like. If a design aliased the GICD frame to multiple locations, that's not necessarily non-compliant. A generic driver (unaware of the aliasing) would just know about and use one copy of GICD.
Where it could get interesting is scenarios around things like power management and/or hot swapping.