Hello,
The way an APB transfer is defined, cycle by cycle:
1. sets SEL=1, sets the control signals (address, data, write, etc.)
2. sets ENABLE=1, keeps the control signals static
3. assuming the device was ready to accept the data (ready=1), sets SEL=0 and ENABLE=0
4. sets SEL=1, sets new control signals (address, data, write, etc.)
5. jumps back to cycle #2 and continues from there
The way the standard sequences these events, a single transaction takes 3 cycles.
What benefit does the extra cycle between the SEL and ENABLE signals provide? Why couldn't they be asserted on the same cycle, which would complete a single transfer in one cycle (assuming there is no back-pressure)? If the subordinate cannot process the data within a cycle, it already has a way to extend the cycle by keeping ready low.
What benefit does driving SEL=0 in cycle #3 provide? Why couldn't I keep it active for back-to-back transfers?
Thanks in advance,
Khach
Simple answer: step #3 is only required if there is no next transfer. You can have back-to-back transfers.
Have a look at the APB spec, where figure 4-1 shows a path on the right from ACCESS to SETUP, with no requirement to go through IDLE.
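To make that concrete, here is a minimal Python sketch of the operating states from that figure (state names are from the spec; `transfer_pending` and `pready` are my own names for the "transfer requested" and PREADY conditions):

```python
# Sketch of the APB operating states from figure 4-1 of the APB spec.
# IDLE -> SETUP when a transfer is requested; SETUP -> ACCESS after one cycle;
# ACCESS -> SETUP directly when another transfer is pending (back to back),
# so IDLE is only entered when there is no next transfer.
def next_state(state, transfer_pending, pready):
    if state == "IDLE":
        return "SETUP" if transfer_pending else "IDLE"
    if state == "SETUP":
        return "ACCESS"          # always moves on after one cycle
    if state == "ACCESS":
        if not pready:
            return "ACCESS"      # subordinate extends the transfer
        return "SETUP" if transfer_pending else "IDLE"
    raise ValueError(state)
```

Note that with transfers pending the machine just alternates SETUP/ACCESS, i.e. 2 PCLK cycles per transfer, never touching IDLE.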
Thanks for the response. This doesn't completely address my question. I understand the FSM as it is; I am asking why it is the way it is.
Let's say we just remove the SETUP state from it. SEL and ENABLE would be asserted together, with no cycle between them. The new FSM could, for example, have a HOLD state for when PREADY=0; but if PREADY=1 and a new transfer is coming, the operations could happen back to back. With the current FSM, we add a NOP cycle between transfers. I have always followed the spec and everything worked, but now I just want to understand: what benefit does adding a dummy cycle between transfers provide?
Does my question make any sense?
I see what you are focussing on, but your original question implied that step 3 was mandatory, which resulted in the 3 cycles for a transfer. That is incorrect, which is why I focussed on it in my reply, stating that an APB transfer can complete every 2 PCLK cycles. Step 3 is not necessary for back-to-back transfers.
The reason for the 2-cycle minimum is that APB was aimed at supporting very simple interfaces, with SRAM-type timing where you have a 2-step access: the address and control information in the first step, and then the data transfer in the second. So the APB "setup" phase is when the transfer request parameters would be sampled, and the APB "access" phase is when the requested data transfer is performed.
You could have a bus protocol where everything happens in one clock cycle, but this implies long combinatorial logic paths to decode the PADDR address, determine whether it is a read or write transfer, and then allow the data to be written to or read from the combinatorially accessed target (not as easy to constrain and synthesise).
We want to avoid long combinatorial paths, so the APB 2-cycle timing allows you to sample the request in the "setup" phase and then complete the data transfer in the "access" phase.
Note that you can still use combinatorial logic for the APB interface: PSEL=1 and PENABLE=0 tells the peripheral what will be requested, and then PSEL=1 and PENABLE=1 acts as the latch enable control for writes or the output enable for reads.
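As a rough illustration of that two-phase decode on the peripheral side, here is a Python sketch of a trivial register-file peripheral (signal names are from the APB spec; the `regs` dictionary and the function itself are purely illustrative):

```python
# Sketch of a simple APB register peripheral decoding the two phases.
regs = {}

def apb_peripheral(psel, penable, pwrite, paddr, pwdata):
    """Model one cycle of the peripheral; returns (pready, prdata)."""
    if psel and not penable:
        # Setup phase: request parameters are valid and can be sampled here.
        return (False, 0)
    if psel and penable:
        # Access phase: PENABLE acts as the write strobe for writes
        # and the output enable for reads.
        if pwrite:
            regs[paddr] = pwdata
            return (True, 0)
        return (True, regs.get(paddr, 0))
    return (False, 0)  # not selected
```

A write is then two calls (setup, then access), and a read of the same address returns the stored data in its access phase.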
But simplicity of interface implementation is the main driver for APB, and the 2 cycle timing gives you the simplest minimal logic requirement.
Thanks again. That mostly makes sense. Why does the interface require the address to be constant for 2 cycles too? Why can't we capture it during setup (as the spec says) and then not care about it in the access state? This is how a synchronous SRAM would operate, isn't it?
You can capture it in the setup phase; that's how I'd expect you to use it, so you know what data transfer is required in the access phase.
As for why the address and other controls are held constant during the access phase: since this isn't a pipelined bus (back to the simplicity argument), there isn't any functional reason NOT to keep the address and control signals constant for the duration of the access, and there might be a peripheral design that prefers the signals to still be valid during the data transfer, so why change them?