Hello everyone,
I am using an STM32F407ZET6 for, among other things, frequency measurement.
The incoming sine signal is converted in hardware to a (positive-only) square wave, which is fed to a GPIO pin configured as an external interrupt on the rising edge. To measure the cycle duration, I start a timer with a 1 µs period on the first rising edge and disable its counter after, say, 5 rising edges. In the timer interrupt a variable is incremented to count the elapsed microseconds, and I then use the mean period over those 5 cycles to determine the signal frequency. (I can't use the timers' input-capture function because the square-wave signal arrives on a pin that doesn't support it.)
Now this works very well: the results are accurate and fit their purpose by all means. But only up to the point where I add several delays (to control an LCD display) before the frequency measurement. For some reason these seem to interfere with correct timer operation: the more delay functions I insert, and the longer they are, the greater the error in the frequency measurement.
I have implemented the delays both with the same timer I use for the frequency measurement (timer 3) and with simple NOP loops. Both produce the error described above. It feels as if the delays somehow mess with the system ticks.
Any inputs on this topic would be greatly appreciated.
Kind regards, Alain