I have noticed intermittent data corruption while using the SPI on an LPC2103:
If the master is generating the clock so the slave can provide data to it, and the SPI ISR does not insert a 'read delay' before reading SPDR (and sending the next clock byte), then the next data byte received is corrupted.
According to NXP (which, I see, took great care in describing the terminating conditions of each device):
"When a master, this bit is set at the end of the last cycle of the transfer. When a slave, this bit is set on the last data sampling edge of the SCK."
I read this as the following (crude drawings included):
If this is the last SCK pulse, the master sets its SPIF when the end of the last cycle is over:
                         set
    -----\                |
          \               |
           \______________|______   SCK
If this is the last SCK pulse, the slave sets its SPIF on the last data sampling edge:
          set
           |
    -----\ |
          \|
           \_____________________   SCK
I have read the 2103 errata concerning the SPI when CPHA = 0, but the settings for these devices are CPOL = 0, CPHA = 1, so that is not the problem.
The SPSR is read prior to the SPDR (with a delay between the status register and data register accesses). If the delay is set to 8 iterations or more, there is no data corruption (delay via NOPs or equivalent).
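For concreteness, here is a minimal sketch of the slave-side ISR sequence described above, assuming lpc21xx.h-style register names (S0SPSR, S0SPDR, S0SPINT, VICVectAddr); next_tx_byte() and handle_rx() are hypothetical placeholders for the application code:

    #include <lpc21xx.h>

    extern unsigned char next_tx_byte(void);
    extern void handle_rx(unsigned char byte);

    void __attribute__((interrupt("IRQ"))) spi0_isr(void)
    {
        volatile unsigned char status = S0SPSR;  /* read status first; SPSR read + SPDR access clears SPIF */
        (void)status;

        /* empirical workaround: ~8 iterations of delay between the
           status and data register accesses, or the next byte is corrupted */
        for (volatile int i = 0; i < 8; i++)
            ;                                    /* NOPs or equivalent */

        handle_rx(S0SPDR);                       /* read the byte just shifted in      */
        S0SPDR = next_tx_byte();                 /* preload data for the next transfer */

        S0SPINT = 0x01;                          /* clear the SPI interrupt flag */
        VICVectAddr = 0;                         /* acknowledge the VIC          */
    }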
The master SPI clock setting meets the NXP requirement: "As a result, bit 0 must always be 0. The value of the register must also always be greater than or equal to 8."
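The master-side setup that satisfies this, as a sketch (CPOL = 0, CPHA = 1 as above; register names again assumed from lpc21xx.h):

    S0SPCCR = 8;          /* SPI clock = PCLK / 8: even and >= 8, per the manual */
    S0SPCR  = (1 << 3)    /* CPHA = 1                   */
            | (1 << 5)    /* MSTR = 1: master mode      */
            | (1 << 7);   /* SPIE = 1: interrupt enable */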
Is it possible that this corruption is a result of the timing differences between the master and slave SPIF notifications or does this seem to be something more fundamental? Or has anyone else ever seen this?
Then I think you are making an incorrect assumption about intended use.
Any data the slave should send out at the start of a new transfer should have been written to the shift register - or output buffer or FIFO - way before.
That is also why some protocols built on SPI activate the slave select and then let the slave send one or more dummy bytes before starting to issue real data - to give the slave time to start emitting fresh data. Sending a byte first gives the slave plenty of time to compute something and prepare for transmission.
That is also a big reason why many chips don't support automatically driven slave select from the master. This allows the master to activate the slave select, indicating to the slave that it should prepare the first byte of a new transmission. The master can then decide - in software, or using timers or similar - how long to wait before it starts the first transfer.
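A sketch of that pattern, assuming SSEL is driven from a GPIO (the pin mask, the delay length, and the IOCLR name from lpc21xx.h are all placeholders to adjust for your board):

    #define SSEL_MASK (1 << 7)          /* hypothetical: whichever P0 pin drives SSEL */

    static void margin_delay(volatile int n) { while (n--) ; }

    void start_transfer(unsigned char first_byte)
    {
        IOCLR = SSEL_MASK;              /* assert SSEL (active low): slave may now prepare its first byte */
        margin_delay(100);              /* software-chosen margin before clocking starts */
        S0SPDR = first_byte;            /* only now start the first transfer             */
    }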
If the SPI slave is intended to always send the current time on each transfer, then it can't insert the first byte of data on speculation. So the master must either activate the select signal with some margin, or know that a four-byte time stamp from the slave requires sending five bytes, where the first byte is just a dummy transfer.
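A polled sketch of the dummy-leading-byte variant for the four-byte time stamp case (the byte order and the 0xFF filler value are my assumptions):

    unsigned long read_timestamp(void)
    {
        unsigned char buf[5];
        int i;

        for (i = 0; i < 5; i++) {
            S0SPDR = 0xFF;                /* clock out a filler byte            */
            while (!(S0SPSR & (1 << 7)))  /* wait for SPIF (bit 7)              */
                ;
            buf[i] = S0SPDR;              /* read; SPSR+SPDR access clears SPIF */
        }

        /* buf[0] is the throw-away byte; buf[1..4] carry the time stamp */
        return ((unsigned long)buf[1] << 24) | ((unsigned long)buf[2] << 16)
             | ((unsigned long)buf[3] << 8)  |  (unsigned long)buf[4];
    }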
In your case, you seem to be using CPHA = 1. In that case, the SSEL signal will always go inactive between transfers. That ensures you get enough reaction time between two transfers.
By the way - do you need two SPI interfaces on the chip? If not, then I would recommend that you use SSP1 instead of SPI0. With FIFO support, it will automagically handle the read+write needed to keep the transfer going. But the same applies there - the slave must either prepare the first byte directly on SSEL activation, or even before SSEL. After that, it's enough to top up the outgoing FIFO and read out a corresponding amount of data from the incoming FIFO.
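A sketch of that SSP route (SSP register names per lpc21xx.h; 8-bit frames, CPOL = 0, CPHA = 1 to match your setup):

    void ssp_init(void)
    {
        SSPCR0  = 0x0087;     /* DSS = 8-bit, FRF = SPI, CPOL = 0, CPHA = 1 */
        SSPCPSR = 8;          /* prescaler: must be even, >= 2              */
        SSPCR1  = 0x02;       /* SSE = 1: enable, master mode               */
    }

    void ssp_exchange(const unsigned char *tx, unsigned char *rx, int n)
    {
        int sent = 0, got = 0;
        while (got < n) {
            while (sent < n && (SSPSR & (1 << 1)))   /* TNF: TX FIFO not full  */
                SSPDR = tx[sent++];
            while (got < sent && (SSPSR & (1 << 2))) /* RNE: RX FIFO not empty */
                rx[got++] = SSPDR;
        }
    }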
'Way before' what? If it is an interrupt-driven transfer, then the interrupt is the trigger for the next data byte. If there is no FIFO, then this sequence is the only possible transfer mechanism. If the documentation is interpreted correctly, the slave has the time from the SCK trailing edge to load the byte.
"If the SPI slave is intended to always send the current time on each transfer, then it can't insert the first byte of data on speculation."
It would do so on a real-time event - the SPIF interrupt. There is no 'speculation' for an interrupt.
"So the master must either activate the select signal with some margin, or know that a four-byte time stamp from the slave requires sending five bytes, where the first byte is just a dummy transfer."
I also don't buy into the 'preamble-type' approach to successful data transfer. If it came down to it, I would prefer to change the data to 16 bits and pad the leading or trailing bits with zeros to 'buy' time for the next transfer. Either way, the data transfer rate would be affected by whatever 'modification' is implemented.
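As a sketch of that alternative: SPI0 on the 2103 can do 16-bit transfers via the BitEnable and BITS fields of S0SPCR, so each frame could carry one real byte plus eight padding bits (the padding layout below is just my assumption):

    S0SPCR = (1 << 2)     /* BitEnable: BITS field selects the frame length */
           | (1 << 3)     /* CPHA = 1                                       */
           | (1 << 5)     /* MSTR = 1 (0 on the slave side)                 */
           | (1 << 7)     /* SPIE = 1                                       */
           | (0 << 8);    /* BITS = 0000: 16 bits per transfer              */

    /* slave side: real byte in the low half, so with MSB-first shifting
       the leading zeros go out first and buy time to load the next frame */
    S0SPDR = (unsigned short)data_byte;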