The AXI spec states that, even in the case of an ERROR response, the slave must respond with the exact number of beats and indicate the response with each beat.
My question is: if the response is going to be an ERROR (say SLVERR), why does the master, or anyone else, care about the exact number of beats in the response?
For example, in a timeout scenario, the slave should be allowed to send a single response with RID, RLAST, and RRESP = SLVERR, and that should be enough to tell everyone in the path that the transaction timed out or errored.
Why is the protocol not made this way? It seems complicated that everyone tracking the timeout needs to remember the exact number of beats to transfer.
As the protocol states in section A3.4.4...
"In a read transaction, the slave can signal different responses for different transfers in a burst."
So just because one read transfer fails doesn't mean all the transfers in the transaction will fail. If the master is told exactly which reads failed, it can choose to re-attempt only the failed accesses, perhaps once the cause of the failure has been corrected. Alternatively, the master may already know the nth transfer will fail (perhaps it is an unimplemented register location in a slave), but a single long transaction covering many adjacent addresses is simpler and more efficient on the bus.
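As a rough illustration of the point above, here is a minimal Python sketch (all names are illustrative, not from any real AXI library) of a master collecting the per-beat RRESP values from an 8-beat read burst and identifying only the beats it might need to retry:

```python
# AXI read response encodings (2-bit RRESP values from the spec).
OKAY, SLVERR = 0b00, 0b10

def failed_beats(rresp_per_beat):
    """Return the beat indices whose response was not OKAY."""
    return [i for i, resp in enumerate(rresp_per_beat) if resp != OKAY]

# Beat 5 targets an unimplemented register, so only it returns SLVERR;
# the other seven beats in the burst complete normally.
burst = [OKAY, OKAY, OKAY, OKAY, OKAY, SLVERR, OKAY, OKAY]
retry = failed_beats(burst)   # only beat 5 needs re-attempting
```

Because every beat carries its own response, the master can retry just beat 5 rather than the whole burst.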
Write responses are less informative: only one response is returned after all AWLEN+1 transfers have completed. Even then, knowing that the entire transaction has completed allows the master to interrogate a fault status register in the slave to find out exactly which transfer(s) failed, avoiding the need to re-issue the whole transaction.
So by ensuring the transaction is completed before handling any non-OKAY response, you leave the master design free to choose which transfers it possibly needs to repeat.
The AXI protocol is designed to ALWAYS complete the number of transfers indicated by AxLEN (only assertion of reset would terminate an ongoing transaction), so masters and slaves must be designed to always perform the requested number of transfers. There is no concept of early burst termination like that seen in the AHB protocol.
It doesn't need to be complicated with everyone tracking the number of transfers completed; the transaction data source must track the number of transfers completed so that it can assert xLAST at the appropriate transfer, but the transaction data destination can just look at the incoming xLAST to see when the transaction is ending.
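The asymmetry described above can be sketched as follows (a simplified model with assumed names, not real AXI interface code): the data source counts transfers so it can assert RLAST on the final beat, while the destination never sees ARLEN and simply watches for RLAST:

```python
def source_beats(arlen, data):
    """Data source: yield (rdata, rlast) for a burst of ARLEN+1 transfers.
    The source must count beats so it can assert RLAST on the last one."""
    total = arlen + 1
    for i in range(total):
        yield data[i], (i == total - 1)   # RLAST asserted on final beat

def destination_collect(beats):
    """Data destination: consume beats until RLAST, without knowing ARLEN."""
    received = []
    for rdata, rlast in beats:
        received.append(rdata)
        if rlast:
            break
    return received

# ARLEN = 7, so 8 beats flow; the destination stops on RLAST alone.
words = destination_collect(source_beats(7, list(range(8))))
```

Only the source needed the transfer count; the destination's tracking logic reduces to checking one signal.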
Is the following right? At some point in the read path, say in a bus matrix, ARLEN needs to be remembered. Assume, for example, ARLEN = 7 (8 data beats expected). If, for some reason, the link goes down after 2 read data beats, then the bus matrix must send 6 dummy beats with SLVERR; is this right?
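The scenario in this question can be sketched in a few lines of Python (illustrative names and a hypothetical `pad_failed_burst` helper, under the assumption that the answer is yes): after delivering 2 of the 8 beats, the bus matrix pads the burst with SLVERR beats and asserts RLAST on the final one:

```python
# AXI read response encodings.
OKAY, SLVERR = 0b00, 0b10

def pad_failed_burst(arlen, delivered):
    """Return (rresp, rlast) pairs for the beats a bus matrix must still
    emit after the downstream link fails mid-burst."""
    total = arlen + 1
    remaining = total - delivered
    return [(SLVERR, i == remaining - 1) for i in range(remaining)]

# ARLEN = 7 (8 beats expected), link fails after 2 beats:
# 6 dummy beats follow, all SLVERR, with RLAST only on the last.
beats = pad_failed_burst(arlen=7, delivered=2)
```

This keeps the burst length invariant intact, so every downstream component sees exactly the AxLEN+1 beats it expects.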