Hello All,
In CMSIS there is a framework for UART communication. However, I have to know in advance how many characters to receive.
I would have expected the UART driver to write continuously into some kind of circular buffer.
Usually I know neither WHEN communication will happen nor HOW MUCH data will be transmitted. I assume that when I abort the receive function on a timeout and then restart it, I might lose data on the hardware in the meantime, because while the receive function is not active no DATA AVAILABLE event will be handled.
In my real-life projects I only use the UART transmit functions and implement my own custom receive functions.
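For illustration, a minimal sketch of such a custom receive path - an ISR-fed ring buffer. All names here are mine, not from CMSIS, and a real version would be called from the UART receive interrupt:

```c
#include <stdbool.h>
#include <stdint.h>

#define RX_BUF_SIZE 128u   /* power of two would keep the wrap cheap */

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint32_t rx_head;   /* advanced only by the ISR    */
static volatile uint32_t rx_tail;   /* advanced only by the reader */

/* Called from the UART receive interrupt with each received byte.
   Single writer (ISR) / single reader (app) means no locking is
   needed on a single core. */
void uart_rx_isr(uint8_t byte) {
    uint32_t next = (rx_head + 1u) % RX_BUF_SIZE;
    if (next != rx_tail) {          /* buffer full: drop the byte */
        rx_buf[rx_head] = byte;
        rx_head = next;
    }
}

/* Non-blocking read; returns true if a byte was available. */
bool uart_read(uint8_t *out) {
    if (rx_tail == rx_head)
        return false;
    *out = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1u) % RX_BUF_SIZE;
    return true;
}
```

With this, the application can poll `uart_read()` whenever convenient, and no receive call ever has to be "armed" in advance.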
How does CMSIS address this issue? What is the best practice here?
Best regards.
Adib.
Hello Thomas,
Thanks for sharing your thoughts.
Actually, there are two disadvantages in the current implementation:
1. The application is not informed instantly, but either by a timeout or by a complete result.
2. Even if you try to restart the RX process quickly, there may be short outages where you don't receive.
In our current projects (STM32 CubeMX) we use:
- DMA receive with a ring buffer, and
- application polling (could be done via osEvent from the ISR).
We could use a per-character interrupt that signals the application via osEvent, but this generates a high load compared to DMA RX. However, the solution we use is not ideal either.
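To make the DMA-plus-polling approach concrete, here is a sketch under the assumption of an STM32-style DMA in circular mode, where the stream's NDTR register counts down the remaining transfers; the function and variable names are illustrative, not from any library:

```c
#include <stdint.h>

#define DMA_BUF_SIZE 64u

static uint8_t  dma_buf[DMA_BUF_SIZE];  /* DMA writes here in circular mode */
static uint32_t dma_tail;               /* application read position        */

/* On an STM32-style DMA, NDTR counts DOWN from DMA_BUF_SIZE, so the
   current hardware write index is DMA_BUF_SIZE - NDTR. The application
   polls this and consumes everything between its tail and that index. */
uint32_t dma_rx_poll(uint32_t ndtr, uint8_t *out, uint32_t max) {
    uint32_t head = DMA_BUF_SIZE - ndtr;
    uint32_t n = 0;
    while (dma_tail != head && n < max) {
        out[n++] = dma_buf[dma_tail];
        dma_tail = (dma_tail + 1u) % DMA_BUF_SIZE;
    }
    return n;
}
```

The polling interval only has to be short enough that the DMA cannot lap the application's tail; the hardware itself never stops receiving.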
Regards,
Adib. --
With a standard FIFO implementation you normally also get a bit of delay - since the hardware will normally not produce an interrupt until data stops arriving or the FIFO has reached the report watermark.
Not too many implementations need a faster response than that - and if they do, they normally have the UART produce an interrupt for every received character, and then either let the main loop poll a queue or have the interrupt set a signal. But situations where you do need a quick response to every received character normally involve quite reasonable baud rates.
What is not acceptable is if the driver layer has periods/situations where characters can be lost because the ISR doesn't know where to store received characters, or because a DMA transfer doesn't have double-buffering and so can't switch to a secondary buffer while the main application processes the data in the previous one.
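The double-buffering idea can be sketched as a ping-pong scheme. The names below are illustrative; in real hardware the "ISR" would be the DMA transfer-complete interrupt, and the controller would already be filling the other half while the application works:

```c
#include <stdint.h>

#define HALF 32u

static uint8_t      buf[2][HALF]; /* ping-pong halves filled by the DMA */
static volatile int active;       /* half the hardware is filling now   */
static volatile int ready = -1;   /* half ready for the application     */

/* Transfer-complete ISR: hand the full half to the app and let the
   hardware continue into the other half without a gap. */
void dma_done_isr(void) {
    ready   = active;
    active ^= 1;
}

/* Application side: fetch a completed half, if any. Returns 1 and sets
   *data/*len when a half is ready, 0 otherwise. */
int get_ready_half(const uint8_t **data, uint32_t *len) {
    if (ready < 0)
        return 0;
    *data = buf[ready];
    *len  = HALF;
    ready = -1;
    return 1;
}
```

A real implementation would also flag the overrun case where the ISR fires again before the application has consumed the previous half.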
Have you verified that the CMSIS driver doesn't have some buffer capability at the driver layer - or can make use of a hardware FIFO - between the function calls?
Hello Per,
> Have you verified that the CMSIS driver doesn't have some buffer capability at the driver layer - or can make use of a hardware FIFO - between the function calls?
Actually, the driver does not have its own RX buffer.
The only function where you inform the driver about buffer space for receive is: ARM_USART_Receive()
According to the current specification, buffer responsibility goes back from the driver to the application whenever a timeout or RX-complete occurs. After the driver has given responsibility back to the application, the application may RESTART the receive, but in that short time the driver has disabled the RX functionality of the hardware.
This is how the CMSIS is designed.
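One common way to keep that window as short as possible is to re-arm the receive directly in the SignalEvent callback. The sketch below mocks the CMSIS-Driver USART pieces it touches (the access struct's Receive member and the event constant) so the pattern can be shown without hardware; in a real project these come from Driver_USART.h, and the event value used here is only a stand-in:

```c
#include <stdint.h>

/* Mocked stand-ins for Driver_USART.h -- real projects use the actual
   header and driver instance instead. */
#define ARM_USART_EVENT_RECEIVE_COMPLETE (1UL << 1)   /* stand-in value */

typedef struct {
    int32_t (*Receive)(void *data, uint32_t num);
} DRIVER_USART_MOCK;

static uint32_t rearm_calls;                  /* counts Receive() re-arms */
static int32_t mock_receive(void *data, uint32_t num) {
    (void)data; (void)num;
    rearm_calls++;
    return 0;                                 /* ARM_DRIVER_OK */
}
static DRIVER_USART_MOCK Driver_USART1 = { mock_receive };

static uint8_t  rx_byte;                      /* one-byte driver buffer  */
static uint8_t  app_ring[64];                 /* application ring buffer */
static uint32_t app_head;

/* SignalEvent callback: copy the byte into the application's ring
   buffer, then immediately restart the receive so the period in which
   hardware RX is disabled stays as small as possible. */
void usart_event(uint32_t event) {
    if (event & ARM_USART_EVENT_RECEIVE_COMPLETE) {
        app_ring[app_head % sizeof app_ring] = rx_byte;
        app_head++;
        (void)Driver_USART1.Receive(&rx_byte, 1u);
    }
}
```

This only shortens the gap to the callback latency; it does not remove it, which is exactly the limitation discussed above.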
I've never understood the perceived requirement to use CMSIS.
If it can be used and does the job, then fine.
If it doesn't do something needed, write an alternative. It's certainly not difficult to write a UART driver from scratch. Then, the only person you can complain to when something doesn't work is yourself.
The reason I asked whether the CMSIS implementation could make use of any FIFO or similar between the calls is that I prefer lots of control - so I use my own code as often as possible, and have stayed away from all CMSIS functionality.
It's often very quick to port code between processors, and I get the same API for the business logic. My own code means I can always take advantage of any extra hardware features, and I can do it the same day I see the need, instead of having to wait and hope for a driver-layer update.
The biggest advantage with own code is that it forces me to read the datasheet and evaluate each individual configuration bit to see what nice bonus functionality I can find and make use of. With a pre-cooked driver layer, it's easy to forget to read the processor documentation and figure out exactly what limits the hardware has.
Quite a lot of the hardware tends to be quite easy to control until reaching the Linux-class of processors.
"The biggest advantage with own code is that it forces me to read the datasheet and ..."
Exactly so and I've always done the same.
Actually, I agree with all you wrote there.
Unfortunately, in my experience I'd say that it's not a very common point of view; and for various reasons. Just yesterday a new manager was forced on me. He comes from a Windows/Microsoft background and thinks embedded systems must be developed in the same way. Why develop a component when you can just drop an existing one from someone else into a project? Heated discussions ahead.