
ADuC814 SPI with Python

Hello, I know the ADuC814 is very old, but I am hoping there are still people here who can help me with SPI communication.

I will try to explain with as much detail as possible.

First, my block schematic:
https://user-images.githubusercontent.com/40359579/67375267-a9830380-f582-11e9-91c9-7113fc9ee1f4.png

Master : UM232H (python code) https://www.ftdichip.com/Support/Documents/DataSheets/Modules/DS_UM232H.pdf

Slave : Aduc 814 (c code, using uVision) https://www.analog.com/media/en/technical-documentation/data-sheets/ADUC814.pdf

My python code is using pyftdi https://eblot.github.io/pyftdi/api/spi.html

This is my python code : 

test_spi.txt
import sys
sys.path.append("C:/Users/aguillem/AppData/Local/Continuum/anaconda3/Lib/site-packages")
from pyftdi.spi import SpiController


spi = SpiController(cs_count=1)
spi.configure('ftdi://ftdi:232h:FT337Y88/1')
slave = spi.get_port(cs=0, freq=1000000, mode=3) # SPI mode 3: CPOL=1, CPHA=1
write_buf = b'\x00\x03\x04'
read_buf = slave.exchange(write_buf, duplex=True).tobytes()
print("write buf ¦{}¦  \nread buf  ¦{}¦".format(write_buf, read_buf))
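A side note on the `mode` argument: it selects the SPI clock mode (the CPOL/CPHA pair), not half vs. full duplex — duplex is chosen per transfer via `exchange(..., duplex=True)`. A quick host-side sketch of the encoding (plain Python, no hardware needed):

```python
# The SPI mode number encodes (CPOL, CPHA): mode = (CPOL << 1) | CPHA
def spi_mode_bits(mode):
    cpol, cpha = divmod(mode, 2)
    return cpol, cpha

print(spi_mode_bits(3))  # mode 3 -> (1, 1): CPOL=1, CPHA=1
print(spi_mode_bits(2))  # mode 2 -> (1, 0): CPOL=1, CPHA=0
```

Note that mode 3 (CPHA=1) on the master does not match the CPHA=0 (i.e. mode 2) setting in the SPICON register of the C code below — worth double-checking that the two sides agree.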

This is my C code (I had to change the extension to .txt to upload it):

main.txt
#include <ADUC814.H>
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <GLOBAL.H>
#include <FUNCTION.H>

/* SPI port interrupt service routine */
void spi_int () interrupt 7
{
	recieved_byte = SPIDAT;       // reading SPIDAT fetches the received byte (global, declared in GLOBAL.H)
	sent_byte = recieved_byte + 1;
	SPIDAT = sent_byte;           // will only be clocked out on a FOLLOWING byte
	ISPI = 0;                     // ISPI must be cleared by user software on the ADuC814
}


unsigned char enable_interrupts()
{
	EA = 1;                       // Enable General interrupts
	IEIP2 = 0x11;                 // Enable SPI interrupt with high priority
	return 0;
}

unsigned char init()
{
	/* Select core clock */
	PLLCON  = 0x04;         // 1.048576 MHz (0x00 = 16.777216 MHz | 0x01 = 8.388608 MHz | 0x03 = 2.0971 MHz)
	
	/* Configure SPI */
	SPICON = 0x28; 					// SPE=1, Slave, CPOL=1, CPHA=0 (SPI mode 2)
	CFG814 = 0x01; 					// Enable SPI interface
	
	return 0;
}

//-----------------------------------------------------------------------------
//MAIN C function
//-----------------------------------------------------------------------------
void main (void)
{
	
	// Initialize
	init();
	enable_interrupts();
	
	while (1)
		{
			
		}
}




And this is what I have on the oscilloscope:

[oscilloscope screenshot]

And what I have on my python console :


write buf ¦b'\x00\x03\x04'¦
read buf ¦b'\x01\x00\x00'¦

Process returned 0 (0x0) execution time : 0.307 s
Press any key to continue . . .


So my problem is that I expect (or at least want, if my code is wrong) each MOSI byte to come back incremented by 1 on MISO, but here only the first byte is incremented.

Another thing that is wrong: I have to run the Python program twice before seeing the incremented signal; the first time it returns all 0x00, and the second time it returns 0x00 + 1.

Clock from the Python (master): 2 MHz; clock for the slave (C code): set to 1 MHz. I tried changing both the slave and master clocks, with the master 2 or 4 times faster or slower than the slave, but always with the same behavior.

Does anyone know what to do?

  • Clock from the Python (master): 2 MHz; clock for the slave (C code): set to 1 MHz,

    That makes no sense, because there is no such thing as setting the clock of an SPI slave.  SPI slaves receive their clock from the master.

    So my problem is that I expect (or at least want, if my code is wrong) each MOSI byte to come back incremented by 1 on MISO,

    You're sending a 24-bit train on MOSI.  Which of those is supposed to be the "signal" you want to increment? And where in the 24-bit train on MISO do you expect that to end up?

    Note that, at the speed you're running that device, there's virtually no chance it'll be able to serve one byte's reception interrupt before the next one is already supposed to be out on the line.
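To put rough numbers on that claim (a back-of-the-envelope sketch; the 12-clock figure is for the original 8051 core in the ADuC814, and the clock values are the ones quoted in this thread):

```python
SCLK_HZ = 1_000_000        # master SPI clock, from the pyftdi code above
CORE_HZ = 1_048_576        # ADuC814 core clock with PLLCON = 0x04
CLOCKS_PER_INSTR = 12      # original 12-clock 8051 core

byte_us = 8 / SCLK_HZ * 1e6                    # one SPI byte on the wire
instr_us = CLOCKS_PER_INSTR / CORE_HZ * 1e6    # one machine instruction

print(round(byte_us, 1), round(instr_us, 1))   # 8.0 us/byte vs ~11.4 us/instruction
```

So the core cannot retire even one instruction in the time a byte spends on the wire, let alone run a whole interrupt service routine.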

  • By the clock of the slave I meant the PLLCON register: it can run at 16 MHz, but I chose 2 MHz. But no issue can be related to the slave clock because, as you said, the master provides the clock anyway.

    I just tried to modify the value of SPIDAT in the interrupt: if I write SPIDAT = 0x55, it returns it on MISO, but if I write SPIDAT = 0x555555 it also returns 0x55, so I think it can't return more than 8 bits.

    I looked at the datasheet, and it is written on page 44/72:
    The data is transferred as byte-wide (8-bit) serial data, MSB first. SCLOCK (Serial Clock)
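That matches: SPIDAT is an 8-bit register, so a wider constant is simply truncated to its low byte before transmission. A small host-side illustration of the truncation:

```python
SPIDAT_BITS = 8
mask = (1 << SPIDAT_BITS) - 1        # 0xFF: SPIDAT holds exactly one byte

print(hex(0x55 & mask))      # 0x55
print(hex(0x555555 & mask))  # 0x55 -- the upper 16 bits are simply lost
```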

  • but I chose 2 MHz

    Yet according to your earlier message, and the comments in the code, you chose about 1 MHz.  So which is it, actually?

    But no issue can be related to the slave clock because, as you said, the master provides the clock

    It's not quite that simple.  SPI slaves in microcontrollers do have limitations on how high an SPI clock frequency they can handle, in relation to their core clock.

    I just tried to modify the value of SPIDAT in the interrupt,

    What you write into SPIDAT will only show up on the line on the next byte's worth of clock bits, at the earliest.  And that's as in: the next byte after your CPU has actually managed to:

    • have its SPI port notice the incoming byte
    • trigger the interrupt service routine
    • read the byte from the register
    • perform arithmetic on it
    • write the new value to the output register
    • finish clocking out the byte that was previously written into the register.

    For such an antique, running at a measly 1 MHz, all those steps will quite certainly take longer to perform than the entire rest of the inbound telegram takes to fly right past it.  In other words: the earliest you can realistically expect the reply to the first byte of that 3-byte telegram to go onto the line is as the first byte of the next telegram.  Which, I think, is exactly what happens.
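The observed console output fits a simple toy model (an assumption for illustration, not a description of the real silicon): each transfer clocks out whatever single byte sits in SPIDAT, the rest of the telegram shifts out zeros, and the slow ISR only gets around to answering the first received byte after the telegram is over:

```python
def exchange(tx, slave):
    """One master transfer against a toy model of the overloaded slave."""
    rx = []
    for byte in tx:
        rx.append(slave['spidat'])   # whatever is in SPIDAT shifts out first
        slave['spidat'] = 0x00       # afterwards the shift register runs empty
    # the ISR only finishes after the telegram has flown past, so only the
    # FIRST byte gets answered -- and the answer waits for the next telegram
    slave['spidat'] = (tx[0] + 1) & 0xFF
    return bytes(rx)

slave = {'spidat': 0x00}                 # SPIDAT is empty after reset
print(exchange(b'\x00\x03\x04', slave))  # 1st run: b'\x00\x00\x00'
print(exchange(b'\x00\x03\x04', slave))  # 2nd run: b'\x01\x00\x00'
```

This reproduces both reported symptoms: the first run returns all 0x00, and subsequent runs return the previous telegram's first byte + 1 followed by zeros — exactly the `b'\x01\x00\x00'` seen in the Python console.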

  • Sorry for my late reply 

    Yet according to your earlier message, and the comments in the code, you chose about 1 MHz.  So which is it, actually?

    Yes sorry it's 1MHz.

    It's not quite that simple.  SPI slaves in microcontrollers do have limitations on how high an SPI clock frequency they can handle, in relation to their core clock.

    Okay, but if it can run at up to 16 MHz, it's probably fine to run it at 1 MHz then.

    For such an antique, running at a measly 1 MHz, all those steps will quite certainly take longer to perform than the entire rest of the inbound telegram takes to fly right past it.  In other words: the earliest you can realistically expect the reply to the first byte of that 3-byte telegram to go onto the line is as the first byte of the next telegram.  Which, I think, is exactly what happens.

    I haven't understood everything about why this happens or the solution you propose.

    Does that mean 1 MHz isn't quick enough?

    I just tried with 10 MHz from the CLK and the slave at 16 MHz, and I got the same; I tried with 2 bytes, and the 1st is incremented correctly, but the second byte remains 0.

    And leaving the slave at 16 MHz while putting the master CLK at 250 kHz, there was no incrementation at all; the difference was too big.

    And after putting both clocks at 250 kHz it worked again, so the second byte isn't usable because of the clock speed.

  • Does that mean 1 MHz isn't quick enough?

    It means a core clock of 1 MHz is way too slow for the '51 CPU core in there to keep up with your SPI transactions.  Note that this is the ancient, original 12-clock '51 core, i.e. it takes at least 12 core clock cycles to execute a single machine instruction.

    That means at 1 MHz core clock, there'll be only about 2 machine instructions possible in the time of that entire 24-bit, 1 MBaud SPI telegram.  The simulator clocks your interrupt handler at 137 microseconds ... that's about 4 entire 24-bit telegrams.
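The arithmetic behind that estimate (assumed figures from this thread: ~1 MHz core clock, 1 MBaud SPI, 12 core clocks per instruction):

```python
CLOCKS_PER_INSTR = 12          # original 12-clock 8051 core
CORE_HZ = 1_000_000            # assumed ~1 MHz core clock
SCLK_HZ = 1_000_000            # 1 MBaud SPI clock

telegram_us = 24 / SCLK_HZ * 1e6                 # the whole 24-bit telegram
instr_us = CLOCKS_PER_INSTR / CORE_HZ * 1e6      # one machine instruction

print(telegram_us / instr_us)  # 2.0 -- about two instructions per telegram
```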

    the second byte isn't usable because of the clock speed.

    No.  It's unusable because of the mismatch between your expectations and the possibilities of the built-in CPU core.  Even at the full 16 MHz core clock, this core could not realistically finish the interrupt handler between two bytes of that telegram.  At that speed, it's essentially a lottery which of the many bytes you send happens to be lying in SPIDAT when you read it, and also when the reply you write into SPIDAT ends up on the line.

    Why are you even sending 24-bit telegrams here?  The ADUC cannot possibly keep up with those.