Hi, I'm working with an LPC1758 processor and an AT45 dataflash. The dataflash communicates with the processor via SSP1 and DMA.
In my first attempt I want to read the status register from the dataflash. The specific command is a single byte, 0xD7 - no address bytes or dummy bytes have to be transmitted to the dataflash.
I set up the corresponding tx DMA channel

#define tx_datasize 1 /* at45-command-size */

pHw->DMACCControl = (tx_datasize & 0x0FFF) | (0x00 << 12) | (0x00 << 15)
                  | (0x00 << 18) | (0x00 << 21) | (1 << 26) | 0x80000000;
and start the transfer.
Unfortunately, the DMA interrupt handler isn't called. The error seems to be related to "tx_datasize": if I set "tx_datasize" to 2, the DMA interrupt handler is called.
I couldn't find any information in the user manual telling me that I have to increase the tx_datasize...
best regards Lars
I've changed my dma channel to
pHw->DMACCControl = (tx_datasize & 0x0FFF) | (0x02 << 12) | (0x02 << 15)
                  | (0x00 << 18) | (0x00 << 21) | (1 << 26) | 0x80000000;
and now I transmit the status register request byte 0xD7.
Now the AT45 device should respond with a 2-byte message. Therefore I tried to set up another DMA channel in the DMA IRQ handler.
#define DMA_SSP1_TX 2
#define DMA_SSP1_RX 3

void DMA_IRQHandler(void)
{
    unsigned int state;

    state = LPC_GPDMA->DMACIntTCStat;
    if ( state )
    {
        LPC_GPDMA->DMACIntTCClear = state;
        if ( state & (0x01<<1) )
        {
            //start a new P2M dma transaction to get the status bytes from the AT45
            DMA_InitChannel(3, P2M);
            LPC_GPDMACH1->DMACCConfig = 0x0C001 | (0x00<<6) | (DMA_SSP1_RX<<1) | (pDma->mode<<11);
        }
    }

    state = LPC_GPDMA->DMACIntErrStat;
    if ( state )
        LPC_GPDMA->DMACIntErrClr = state;
}
Unfortunately I don't get any further DMA IRQ response...
My channel is now
#define rx_datasize 2

void DMA_InitChannel()
{
    LPC_GPDMA->DMACIntTCClear = 0x01<<3;
    LPC_GPDMA->DMACIntErrClr  = 0x01<<3;

    LPC_GPDMACH3->DMACCControl = 0;
    LPC_GPDMACH3->DMACCConfig  = 0;
    LPC_GPDMACH3->DMACCLLI     = 0;

    //DMACCSrcAddr + DMACCDestAddr already set
    pHw->DMACCControl = (rx_datasize & 0x0FFF) | (0x02 << 12) | (0x02 << 15)
                      | (0x00 << 18) | (0x00 << 21) | (1 << 27) | 0x80000000;
}
Is there something wrong in my code?
I think the burst size and transfer width settings are not correct - maybe someone else can help you further, since I don't know it by heart... but from your description, that could be the problem.
I am not so sure about your situation.
At the memory address SSP1_DMA_RX_DST I can see 0xFF 0xBE 0x88 - the last two bytes are the status bytes. I don't know where the 0xFF is coming from.
However, it sounds like an echo of the necessary dummy clocks. So maybe you should ignore the DMA part at this stage and focus on the SSP part. Make sure that your SSP communication without DMA is correct; DMA is the next stage.
Hi John,
Without using DMA, I can read and write to the dataflash device successfully. I'm not sure if I have to do anything for the SSP driver in the DMA interrupt handler. At the moment I only clear the DMA interrupts.
The first problem is that the SSP clock only runs while I transmit bytes to the SSP controller by DMA - an M2P transfer. If I set up a P2M DMA transfer channel, no clock is generated automatically by the SSP, and therefore no data comes in. With this in mind, I have set up an M2P DMA transfer channel with 3 bytes - only the first byte is essential; after it I send two dummy bytes (0x00) so that the clock is generated for three byte times.
The next problem is the clock signal. My 3-byte M2P DMA transfer generates one byte's worth of clock cycles, a short break of 1µs, another byte's worth of clock cycles, and so on... The dataflash needs a continuous clock for 3 * 8 clock cycles...
Do you know how I can achieve a continuous clock signal for 3 * 8 cycles?
Your memory does not care if there is any short break in sending out clock bits - the only way your memory can see time is by looking at the clock signal. When you freeze the clock, you freeze the time for the memory.
The only thing that matters is whether each byte results in a toggle of the slave-select signal or not. Some devices want a slave select for every byte; some devices require a full message to be sent during a single slave-select activation.
I don't think SPI supports pure peripheral-to-memory transfers, since SPI is always two-way and an SPI master can never receive any data without performing a write. It's just that lots of slaves expect 0xFF for the dummy bytes.
Hi Per,
In my previous post I forgot one important thing: the SPI dataflash is running in SPI mode 3. After setting CPOL=1 and CPHA=1 I can communicate with the dataflash (read & write) without using DMA.
After that, I tried to establish the same using DMA, unfortunately without any success. The big difference between the two attempts (with or without DMA) is that without DMA I always poll the busy bit in the status register, so the clock always runs for 8 cycles, then a short break, and so on. Using DMA, the clock runs without any breaks.
Sending the same command to the dataflash with DMA enabled, the flash does not send anything back to the processor.
When power is first applied to the device, or when recovering from a reset condition, the device will default to SPI Mode 3. In addition, the output pin (SO) will be in a high impedance state, and a high-to-low transition on the CS pin will be required to start a valid instruction. The SPI mode (Mode 3 or Mode 0) will be automatically selected on every falling edge of CS by sampling the inactive clock state.
This is from the dataflash documentation. I'm not sure if I understand it correctly: at startup the device is running in SPI mode 3, so when will the device switch to mode 0? My chip-select signal is driven by a GPIO, not by the SSP interface. So I set this pin low, and after that the transfer starts.
Well, it is possible to run too fast for a device - so it is possible to have a device that sometimes requires a pause.
But have you scoped your communication and verified the sequence of the toggling of all signals for a full 3-byte transfer? Normally it would be enough that the pin toggle sequence is identical - extra pauses shouldn't normally matter. But there can be very specific requirements about a minimum setup time between slave-select activation and the start of the transfer, to make sure that the device is ready to actually start and pick up the individual bits.
Another thing - some people sometimes manage to run the wrong SPI mode but just happen to get something to work because of accidental "luck" in the timing between the clock phase and the slave select. It's also possible to have enough signal delay that it can seem to work with the wrong clock phase, because the delay gives the slave just about enough time to put out the data to send back to the master.
On a scope, you should see a very specific time difference between slave-select changes and clock line changes. And ignoring some extra pauses in the transfer, your pin state changes should totally match the datasheet signalling diagram.
One thing here: when a slave takes a command and then, after decoding it, starts to send a response, some devices might require an actual pause so they have time to figure out what the question was and then insert the answer into the transmit register before the first bit of the answer starts to be clocked out on the MISO line. Memory chips and similar devices with hard-coded SPI logic can normally do without any extra pause. Processor-based slaves may often require a bit of a pause to let the slave processor think before it switches from "input mode" to "output mode" - when running at a high clock frequency, there is very little time between the sampling of the last master command bit and the moment the slave needs to send out the first response bit.
Hi Per, now it is working. I changed CPOL and CPHA back to 0.
The only disadvantage now is that I always receive as many 0xFFs as the size of the command I transmit to the dataflash. For example, if I send the byte message
0x0B 0x00 0x00 0x01 to the dataflash, I receive 0xFF 0xFF 0xFF 0xFF on the rx DMA channel.
Do you know if this behaviour is correct?
/* SSP1 TX and RX */
#define SSP1_DMA_TX_SRC 0x2007CA00              //memory addr
#define SSP1_DMA_TX_DST (LPC_SSP1_BASE + 0x08)  //ssp1 data register for tx and rx
#define SSP1_DMA_RX_SRC (LPC_SSP1_BASE + 0x08)  //ssp1 data register for tx and rx
#define SSP1_DMA_RX_DST 0x2007CB00

//dma-tx-channel
LPC_GPDMACH0->DMACCSrcAddr  = SSP1_DMA_TX_SRC;
LPC_GPDMACH0->DMACCDestAddr = SSP1_DMA_TX_DST;

//dma-rx-channel
LPC_GPDMACH1->DMACCSrcAddr  = SSP1_DMA_RX_SRC;
LPC_GPDMACH1->DMACCDestAddr = SSP1_DMA_RX_DST;
I think the problem is that whenever I transmit a byte to the dataflash, I also receive a byte (0xFF) through DMA, because the dma-tx-channel and the dma-rx-channel use the same SSP peripheral data register.
The second problem is that I can only keep the SPI clock running while the dma-tx-channel is transmitting data bytes to the dataflash. This means I always have to transmit the necessary command bytes plus dummy/fill bytes (0x00) to get any information back from the dataflash...
Well, SPI is always full duplex.
Whenever the master sends a byte, it is also receiving a byte.
But lots of SPI protocols are semi-duplex: the master sends a command while the slave sends "null" bytes, then the master sends "null" bytes while letting the slave send the actual response.
0xFF received is the default when the MISO signal is in the idle state - which it is when no slave is selected, or while the slave is busy receiving a command so that it knows what it is expected to respond with.
So a normal SPI transfer where a 3 byte command is sent to a device and a 5 byte response is expected normally happens like:
Master sends
m1 m2 m3 ff ff ff ff ff
Master receives
ff ff ff s1 s2 s3 s4 s5
So the master must make a transfer that is long enough to cover both the transmission of the command and the reception of the following answer. It can normally throw away as many bytes as the command was long - only then will it start to receive valid response data.
And as I noted earlier: 0xff is normally the expected "filler" data on SPI - not 0x00 - unless the datasheet happens to specify something else.
Per, thanks for your explanations!
One more thing about this subject. My DMA has a four-word FIFO and the SSP interface has an 8-frame FIFO for Tx and Rx.
//P2M transfer settings
LPC_GPDMACH1->DMACCControl = (rx_size & 0x0FFF) | (0x01 << 12) | (0x01 << 15)
                           | (0x00 << 18) | (0x02 << 21) | (1 << 27) | 0x80000000;
The burst size for tx and rx is 8 bytes and I set the destination transfer width to word-width (32-bit). The source transfer width is byte-width (8-bit).
If I send a read command + dummy bytes whose total length is a multiple of 4, everything works as expected: I receive 4x 0xFF and 4x data bytes from the dataflash (in the correct order).
If the read command + dummy bytes is not a multiple of 4, the dma-rx-channel doesn't work as expected, because the DMA doesn't fall back to single-byte transfers when needed; it always transfers 4 bytes at a time (according to the destination transfer width setting). If I set the destination transfer width to byte width, every transfer works as expected, regardless of how many bytes are received.
In the processor datasheet I read that the SSP interface can issue DMA single requests as well as DMA burst requests. I might be wrong, but I thought the DMA would only use the full "destination transfer width" when enough bytes are available, and fall back to smaller transfers when fewer bytes than the destination transfer width remain.