Hello, I'm using an STM32F103C8 and trying to produce a precise delay function. The system clock is 72 MHz (HSE with the PLL), so I figured a clock cycle is about 13.9 ns. The delay function is simply a while(count--) loop, where count holds the number of iterations needed for the requested time. I made the hypothesis that while(count--) takes 3 cycles per iteration, so for a 1 µs delay I put 24 in count.
But I found that count = 24 gives a 4 µs delay, that is, four times the desired time. Can you please help me identify the problem?
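For reference, the setup described above can be sketched like this. loop_count_for_us is a hypothetical helper name, and the 3-cycles-per-iteration figure is precisely the assumption being tested:

```c
#include <stdint.h>

/* Hypothetical helper: iterations needed for a given delay, under the
 * (shaky) assumption that one loop iteration costs a fixed cycle count. */
static uint32_t loop_count_for_us(uint32_t us, uint32_t cpu_hz,
                                  uint32_t cycles_per_iter)
{
    return (uint32_t)((uint64_t)us * cpu_hz / 1000000u / cycles_per_iter);
}

/* The delay as described: the real cost per iteration depends on the
 * compiler, flash wait states and the pipeline, not a guaranteed 3 cycles. */
void naive_delay(volatile uint32_t count)
{
    while (count--)
        ;
}
```

At 72 MHz and 3 cycles per iteration this yields 24 for 1 µs, exactly the value tried above; the observed 4 µs shows the assumption is wrong, not the arithmetic.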
Change your controller if it doesn't have a TIMER.
"Despite the capability of most timers, we still see people totally ignoring them, trying to do everything in software - including basic delays as counted number of iterations of a busy loop." - Per Westermark
Hello Sir, do you suggest that I switch to a Timer or to SysTick? Have you run into problems with software delays before? If so, can you please tell me what causes them?
I used the disassembly view in Keil while debugging (which is, by the way, a pain in the head), and I found that while(count--) compiles to 3 assembly instructions, each of which takes one cycle.
Any assumption about the timing of HLL statements is likely to be seriously flawed. You see the reason why I prefer hardware TIMERS.
Timers are dedicated on-chip hardware peripherals and hence very powerful. They generate precise time delays (provided, needless to say, that they are configured properly).
Use whichever meets your requirements!
SysTick, as the name suggests, is specifically intended for providing a System timing "Tick" to software.
Other timers tend to have far more flexibility for many more applications - so it may be a "waste" to use one where SysTick would do...
Only you know the specific requirements & constraints of your particular project - so only you can decide.
So an instruction takes one clock cycle? But wouldn't at least one of the instructions need to perform a jump depending on whether a condition is true or false? Might not that jump result in a pipeline stall? Have you taken such a pipeline stall into account?
Next thing: you count your cycles and write a perfect software loop directly in assembler. You then implement interrupt-driven serial communication. Who tells you how many extra machine cycles get lost every time that UART ISR is triggered? Wouldn't your software-only delay leak a bit of time for every interrupt that happens?
What happens instead if you use a hardware timer that sits there cool, calm and collected, counting time without caring in the slightest what your program happens to be doing right now? Whether there are zero UART interrupts or 1000 UART interrupts, that timer will still tick exactly the same number of ticks. If it is configured so that every tick takes 1 µs, then 1000 ticks will be exactly 1 ms and one million ticks will be exactly 1 second.
Wouldn't it be better to have your busy loop stay away from trying to guesstimate the number of iterations to run? Instead, constantly compute t1 - t0, where t1 is the current timer tick value and t0 is the tick value when you entered the delay function. Or busy-loop on a volatile flag, and have a timer interrupt set that flag when the timer "rings".
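As a sketch of the t1 - t0 approach (tick_us and its ISR hookup are illustrative names, not from any vendor library; assume a timer interrupt increments the counter every microsecond):

```c
#include <stdint.h>

/* Incremented by a 1 us timer interrupt (SysTick or a general-purpose
 * timer). Illustrative: the ISR itself is not shown here. */
static volatile uint32_t tick_us;

/* Wraparound-safe elapsed time: unsigned subtraction still gives the
 * right answer when tick_us rolls over from 0xFFFFFFFF back to 0. */
static uint32_t elapsed_us(uint32_t t0)
{
    return (uint32_t)(tick_us - t0);
}

void delay_us(uint32_t us)
{
    uint32_t t0 = tick_us;
    while (elapsed_us(t0) < us)
        ;   /* interrupts can fire here without hurting accuracy */
}
```

The unsigned subtraction is the detail that makes this robust: no special case is needed when the tick counter overflows mid-delay.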
Assembler instructions (or maybe __NOP() intrinsics or similar) are perfect for generating extremely short delays when only a minimum delay time is needed. Like when toggling a pin and then requiring the pin to stay high for _at least_ 1 µs before toggling it again, or maybe toggling another pin. A sequence of 10 __NOP() calls can guarantee a minimum delay, making sure hardware accesses have proper setup and hold times. But for longer delays, or delays where an absolute time (not just a minimum time) is required, there is no good software-only alternative. Software loops will always fail in the general case, because they can't handle designs that use interrupts.
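A sketch of that minimum-delay idea; on Cortex-M parts __NOP() is the CMSIS intrinsic, and the fallback #define here is only so the snippet compiles off-target. The NOP count gives a lower bound on the delay, never an exact time:

```c
#ifndef __NOP
/* Fallback so this compiles outside CMSIS; on an STM32, the CMSIS core
 * header already provides __NOP() as the NOP instruction. */
#define __NOP() __asm__ volatile ("nop")
#endif

/* Guarantees *at least* a handful of CPU cycles between two hardware
 * accesses; it can take longer (interrupts, wait states), never less. */
static inline void hold_min(void)
{
    __NOP(); __NOP(); __NOP(); __NOP(); __NOP();
    __NOP(); __NOP(); __NOP(); __NOP(); __NOP();
}
```

This is only suitable for "at least this long" requirements such as setup and hold times, exactly as described above.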
Indeed, I'm using USB to communicate with the PC, so maybe the USB interrupts affect the software delay. I'm now more inclined to use a timer or SysTick; I'll tell you the result as soon as I finish.
The important thing here is that you can almost always use a timer for creating delays without losing its ability to solve other problems too. A free-running timer can be used for polled delays and still constantly reprogram its match registers on every match interrupt, letting the match registers generate zero-jitter pin toggling. And the same timer could be both a PWM device and a fixed-frequency "uptime" generator.
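As a sketch of such a free-running setup on the F103, using the CMSIS device-header register names; this assumes TIM2's input clock is 72 MHz, which depends on your RCC configuration:

```c
#include "stm32f10x.h"   /* CMSIS device header for the STM32F103 */

/* Free-running 1 MHz time base: 72 MHz / (71 + 1) = 1 tick per us.
 * The counter simply rolls over at 0xFFFF and keeps going. */
void tim2_init_1mhz(void)
{
    RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;  /* clock the peripheral */
    TIM2->PSC = 72 - 1;                  /* prescale 72 MHz down to 1 MHz */
    TIM2->ARR = 0xFFFF;                  /* full 16-bit range */
    TIM2->CR1 |= TIM_CR1_CEN;            /* start counting */
}

/* Polled delay off the same timer; the (uint16_t) subtraction handles
 * counter wraparound. Match/PWM channels remain free for other work. */
void tim2_delay_us(uint16_t us)
{
    uint16_t t0 = (uint16_t)TIM2->CNT;
    while ((uint16_t)((uint16_t)TIM2->CNT - t0) < us)
        ;
}
```

Because nothing here touches the compare channels, the same TIM2 could still drive PWM or match interrupts alongside these polled delays.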
Best of all? Most timers are quite easy to program, despite their capabilities.