Hi Martin,
Thanks for your reply. I would like to understand if there is any logic behind arriving at this 4KB boundary.
A 32-bit address can of course address 4G locations.
Regards,
Chandan
I have a software background, not a hardware design one, but I believe it is a trade-off between ease of address resolution and mapping. The smaller the "block" you pick, the more bits you have to look at in order to work out which slave to send the access to. Not all bus technologies use 4KB; APB, for example, I believe uses 1KB.
As Martin Weidmann says, it's a compromise between the number of address bits that need decoding and the minimum space that needs to be allocated to each slave.
As examples: you could say a protocol requires a minimum of 1MB per slave, so there is a lot of wasted address space in most slave designs, but the decoder only has to decode 12 address bits (simpler combinatorial logic). At the opposite extreme, a minimum of 4 bytes per slave means no wasted address space in each slave, but the decoder then has to decode 30 address bits (not great for combinatorial timing).
So 4KB for AXI is a compromise, not too much "wasted" address space in small slaves, and not too many address lines to decode.
1KB or 4KB isn't going to be an issue for the number of slaves possible on the bus (4M or 1M respectively), so the compromise decision is simply the number of address bits to decode against possible wasted space in each minimum slave footprint.
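The split between "decode" bits and "within-slave" bits can be sketched in a few lines of C. This is purely illustrative, assuming a hypothetical system with a 4KB minimum slave size; the names and constants are mine, not from any Arm specification.

```c
#include <stdint.h>

/* Hypothetical decoder for a 4KB minimum slave size: the system decoder
 * only looks at the top 20 bits of the 32-bit address (32 - 12 = 20),
 * while the low 12 bits address locations WITHIN the selected slave. */
#define SLAVE_SHIFT 12u  /* log2(4KB) */

static uint32_t slave_select(uint32_t addr)
{
    return addr >> SLAVE_SHIFT;   /* 20-bit slave index: up to 1M slaves */
}

static uint32_t slave_offset(uint32_t addr)
{
    return addr & 0xFFFu;         /* byte offset inside the 4KB region */
}
```

Shrinking the minimum slave size to 4 bytes would change `SLAVE_SHIFT` to 2, forcing the decoder to compare 30 bits instead of 20, which is where the combinatorial-timing cost mentioned above comes from.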
1KB was what was specified for AHB (I don't think APB specifies any minimum; instead it just takes the minimum defined by whatever bus is driving the APB bridge). The reason this was increased to 4KB in AXI would be that AXI is a newer protocol than AHB, and data bus widths in common use have grown wider between when AHB and AXI were defined. So, looking at maximum bursts of 16 transfers, 4KB allows more flexibility for typical-width, maximum-length bursts within one slave region before the master has to look at ending one burst and starting a new one.
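The flexibility point can be made concrete: AXI forbids a burst from crossing a 4KB address boundary, so a master has to check whether a planned burst fits before issuing it. A minimal C sketch of that check, under assumed names (`burst_stays_in_4kb` is not a real API, just an illustration):

```c
#include <stdint.h>
#include <stdbool.h>

/* AXI requires that a single burst must not cross a 4KB boundary.
 * Given a start address, a beat count, and the bytes per beat, check
 * whether the last byte of the burst lies in the same 4KB region as
 * the first (incrementing burst assumed for simplicity). */
static bool burst_stays_in_4kb(uint32_t start, uint32_t beats,
                               uint32_t bytes_per_beat)
{
    uint32_t last = start + beats * bytes_per_beat - 1u;
    return (start & ~0xFFFu) == (last & ~0xFFFu);
}
```

For example, a maximum-length 16-beat burst on a 128-bit (16-byte) bus spans 256 bytes, so it fits comfortably inside a 4KB region unless it starts close to the boundary; with a 1KB region the same burst would consume a quarter of the region, forcing masters to split bursts far more often.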
JD
Thanks mweidmann and @jd_ for your replies.
So I understand that the 4GB address space can be divided into a number of slaves, and also that all 32 address bits of course need not be used.
Shorter answer this time, "yes".
However, if you don't use all 4GB of the address space, you should put a "default slave" in your system that will be selected whenever the address strays into the unused portion of the 4GB space.
This "default slave" can then drive HREADY and HRESP back to the master, usually to signal an ERROR if the master has tried to read or write to those unused locations, or OKAY if the master is just signalling an IDLE transfer.
What is the significance of using a 12-bit address decoder? Why not a decoder wider than 12 bits? How specifically is it related to slave size?
When I mentioned a 12-bit decoder in my example from 3 years ago I was describing an example of where the minimum slave size might be specified as 1Mbyte. So the system address decoder only needs to look at the 12 MSBs of the 32-bit address bus, because the 20 LSBs will be used to decode accesses WITHIN the 1Mbyte slave.
In the same description I also then described an example system where the minimum slave size was 4 bytes, requiring the system address decoder to decode 30 address MSBs.
I was trying to show that the minimum slave size chosen affects how complex the system address decoder has to be. Too small a minimum slave size and you have a ridiculously complex decoder, too large a minimum slave size and you do have a simple decoder, but fewer slaves supported (or more address space wasted to support physically smaller slaves).
As with all examples you need to read the full description to understand the context of the example.
So the AHB protocols decided on a 1KB minimum slave size, and AXI chose 4KB, those being seen at the time as the best compromises between address decoder complexity and typical minimum slave sizes, to avoid wasting too much address space.