
Deadlock across multiple interconnects

Note: This was originally posted on 5th January 2011 at http://forums.arm.com

For a single AXI matrix, the interconnect can ensure that deadlock cannot occur via CDAS; however, could you please give me some cases where deadlock can occur across multiple interconnects, and recommendations on how to solve it? Thank you very much.
  • Note: This was originally posted on 21st February 2011 at http://forums.arm.com

    Nan,

    As this sort of question (and the answer to it) is probably very specific to the design of the interconnect, I'm hoping you have also sent the same question directly to the IP vendor that supplied your interconnect, as they could then give a specific answer.

    However, if not, and assuming this is either ARM's PL301 or NIC301 interconnect design (the only ones I've had a play with), hopefully this helps answer your question.

    The multi-interconnect deadlock you are describing can occur because of the ordering requirements of the AXI protocol: the CDAS logic ensures ordering is met within one interconnect, but each individual interconnect is not aware of the possible reordering of master transactions by parallel paths through other interconnects.

    As an example using PL301: a master uses the same ID for 2 transactions, and in the first interconnect these map to 2 different slave ports. If these 2 slave ports then drive master ports on a second interconnect, and now map on to the same destination slave port (i.e. the paths reconverge), the order in which these 2 transactions are seen at the final slave depends on a number of things: the arbitration priorities in the second interconnect, and any registering along either path. As both transactions now have unique IDs as a result of the first interconnect logic, there are no ordering rules as far as the second interconnect is concerned, so the slave might see the second issued transaction first.
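    To make the ID behaviour concrete, here is a minimal Python sketch of that reconvergent-path scenario. It is a hypothetical model, not anything from the PL301 RTL: `extend_id` and its 4-bit port prefix are invented for illustration, but the effect matches the description above.

    ```python
    # Hypothetical model (not real RTL) of how ID extension in a first
    # interconnect removes the AXI same-ID ordering guarantee in a second one.

    def extend_id(slave_port, axi_id):
        # Interconnect 1 makes master IDs unique downstream by prefixing
        # the slave-port number the transaction was routed through.
        return (slave_port << 4) | axi_id

    # The master issues two transactions with the SAME ID, so AXI requires
    # them to stay in order -- but only while their IDs still match.
    t1 = {"id": extend_id(0, 0x3), "issued": 1}  # routed via port 0
    t2 = {"id": extend_id(1, 0x3), "issued": 2}  # routed via port 1

    # Interconnect 2 sees two DIFFERENT IDs, so its arbiter may grant either
    # path first; here port 1 happens to win arbitration.
    arrival = [t2, t1]

    print(t1["id"] != t2["id"])            # True: ordering rule no longer applies
    print([t["issued"] for t in arrival])  # [2, 1]: slave sees T2 first
    ```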

    If these transactions are writes, the original master needs to send the first item of write data for the first issued address first, but the slave needs to receive the first item of write data for the second address first (the first received address). Similarly for reads, the master needs to receive all the read data for the first transaction first, but the slave might decide to send all the read data for the second transaction first (either because it saw that second transaction first, or because it is allowed to re-order read transaction responses when the IDs are unique).

    So in both read and write transactions with this simple structure you have a possible deadlock, where the master needs to complete a data transfer for one transaction, and the slave needs to complete a data transfer for the other transaction.
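    That circular wait can be sketched as a tiny wait-for graph (again a hypothetical model; the event names are mine, using the write-channel case):

    ```python
    # Sketch of the write-channel deadlock as a wait-for graph:
    # an edge A -> B means "A cannot happen until B has happened".

    deps = {
        # AXI: same-ID write data follows address issue order at the master,
        # so T2's data cannot start until T1's data has been sent...
        "master_sends_wdata_T2": ["master_sends_wdata_T1"],
        # ...and T1's data only drains once the slave accepts it...
        "master_sends_wdata_T1": ["slave_accepts_wdata_T1"],
        # ...but the slave saw T2's address first, so it wants T2's data first.
        "slave_accepts_wdata_T1": ["slave_accepts_wdata_T2"],
        "slave_accepts_wdata_T2": ["master_sends_wdata_T2"],
    }

    def has_cycle(graph):
        # Standard depth-first search with an "on the current path" marker.
        WHITE, GREY, BLACK = 0, 1, 2
        state = {n: WHITE for n in graph}
        def dfs(n):
            state[n] = GREY
            for m in graph.get(n, []):
                if state[m] == GREY or (state[m] == WHITE and dfs(m)):
                    return True
            state[n] = BLACK
            return False
        return any(state[n] == WHITE and dfs(n) for n in graph)

    print(has_cycle(deps))  # True: every event waits on another -- deadlock
    ```

    The cycle is the deadlock: no event in the loop can ever happen first.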

    There are other more complex examples where the 2 transactions reaching the second interconnect do not target the same slave port, but where other transactions passing through this second interconnect then enforce ordering requirements on the original 2 transactions; the above example is the simplest I can think of.

    The simple solution is to ensure that the first interconnect can only issue transactions to one slave port at a time, or to only allow a second port to be accessed if a different ID value is being used (for reads only). Both of these are CDAS options in ARM's NIC301 and PL301 products.
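    As a rough illustration of the first option, here is a hypothetical sketch of such an issuing gate. The class and method names are invented for this example and are not NIC301/PL301 configuration; the real CDAS logic is a hardware stall on the address channel, modelled here as a `can_issue` check.

    ```python
    # Hypothetical sketch of a "single active slave" CDAS-style rule: a new
    # address is stalled if it targets a different slave port while responses
    # for earlier transactions are still outstanding.

    class SingleSlaveGate:
        def __init__(self):
            self.active_port = None  # slave port currently being targeted
            self.outstanding = 0     # transactions awaiting a response

        def can_issue(self, port):
            # Allowed if nothing is outstanding, or the same port is targeted.
            return self.outstanding == 0 or port == self.active_port

        def issue(self, port):
            assert self.can_issue(port), "address stalled by CDAS rule"
            self.active_port = port
            self.outstanding += 1

        def respond(self):
            # A response completed; release the gate once all have drained.
            self.outstanding -= 1
            if self.outstanding == 0:
                self.active_port = None

    gate = SingleSlaveGate()
    gate.issue(0)
    print(gate.can_issue(1))  # False: port 1 stalls until port 0 drains
    gate.respond()
    print(gate.can_issue(1))  # True: safe to switch slave ports now
    ```

    Stalling like this trades some concurrency for the guarantee that the reconvergent paths above can never hold transactions in conflicting orders at once.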

    NIC301 is also better in this area as you can design multiple interconnects (switches) within one NIC301 component, and all this potential deadlock detection is then checked for you.

    However this is probably too specific a response to work in a general forum like this, so I would suggest you contact your interconnect IP vendor for specific answers to that design.

    JD