I have some questions about the correct use of the CMSIS DSP library function arm_fir_f32. First, some background on what I'm doing and how things are set up.
I have an STM32F4 Discovery board, programmed with IAR EWARM. Just for testing purposes, I'm generating a low-frequency test signal at 40 Hz and feeding it into one of the ADC inputs. The signal is biased so it swings from 0 V to about 2.5 V. It has a low to moderate amount of broadband noise, but at this point I am not purposely mixing or introducing any other signals with it. A timer interrupt is set to a sample frequency of 2 kHz, with a sampling buffer of 2048 samples.
I have already tested and am using the FFT function arm_cfft_f32, and can accurately determine (track) the frequency of the incoming signal when I change it at the source. This seems to be working well.
Now, I would like to use the arm_fir_f32 filter. To do this, I started by reading the CMSIS documentation for the function. To generate the tap coefficients for a FIR low-pass, I used an online filter-design tool.
I generated a 4th-order filter, set the sampling rate to match my software, with a cutoff of 60 Hz, and forced generation to 24 taps so the count would be even. So BLOCK_SIZE is 32, and the number of blocks is 1024/32 = 32.
Following the CMSIS example for this function, I believe I've set it up correctly. The processing chain looks like this:
ADC --> FIR --> FFT
However, I'm not getting the result I would expect. The values returned in the FFT's output buffer are enormously large (they aren't if I comment out / bypass the FIR calls). This leads me to believe I am missing a step. Do I need to normalize the values? I thought that because I entered the sample rate into the FIR design setup this wouldn't be required, but maybe that's incorrect.
Can someone please provide some insight or assistance as to what I am missing or doing incorrectly to apply the FIR processing?
(Hope I deleted that last post before you read it. Thought I might have found problem...but not so)
An update on what I've also tried. I'm now setting the timer interrupt by using the prescaler for the main divide and leaving the period at 1000. It works as long as the product is close to 82500 (my Discovery board's clock is off a bit from the 84000 it should be).
I've found that these settings provide 0.5 Hz resolution, although they should not (the 24 Hz signal lands in bin 48). It seems to be related to the buffer size. It updates every 2 seconds, which is what would be expected given the sample frequency and buffer size:
#define SAMPLE_FREQ 1000
#define BUFFER_SIZE 2048
#define FFT_SIZE 1024
These settings behave quite differently: the 24 Hz signal is in bin 24, the resolution is 1 Hz, and the update rate is 1 second.
#define BUFFER_SIZE 1024
#define FFT_SIZE 512
Hope this additional info helps discover why the FFT is behaving this way...thanks again for looking into this.
NB: On my Discovery board, I installed the LSE crystal and caps to get sub-second time stamps so I could measure the processing intervals. That's the reason the RTC stuff is there (at this point), for nothing else.