I have some questions about the correct use of the CMSIS DSP library call arm_fir_f32. First, I'll provide some background about what I am doing and what the setup is.
I have an STM32F4 Discovery board, using IAR EWARM for programming. Just for testing purposes, I'm generating a low-frequency test signal at 40 Hz and feeding it into one of the ADC inputs. The signal is biased to swing from 0 to about 2.5 Vpp. The signal has a low to moderate amount of broadband noise - but at this point I am not purposely mixing or introducing any other signals with it. There is a timer interrupt set to a sample frequency of 2 kHz, with a sampling buffer of 2048 samples.
I have already tested and am using the FFT function arm_cfft_f32, and can accurately determine (track) the frequency of the incoming signal when I change it at the source. This seems to be working well.
Now, I would like to use the arm_fir_f32 filter. To do this, I started out reading the CMSIS documentation on the function. To implement a FIR low-pass filter, I generated the tap coefficients using an online tool on this website.
I generated a 4th-order filter, set the sampling rate the same as in my software with a cutoff of 60 Hz, and forced generation to 24 taps so the count is even. So BLOCK_SIZE is 32, and the number of blocks is 1024/32 = 32.
Following the CMSIS example for this function, I believe I've set things up correctly. So the chain looks like this:
ADC --> FIR --> FFT
However, I'm not getting the result I would expect. The values returned in the FFT's output buffer are exponentially large (not so if I comment out / bypass the FIR calls). This leads me to believe I am missing a step. Do I need to normalize the values? I thought that because I input the rate into the FIR function setup this wouldn't be required - but maybe that is incorrect.
Can someone please provide some insight or assistance as to what I am missing or doing incorrectly to apply the FIR processing?
The CMSIS FIR function does not have any unusual scaling that you need to follow. Since you are generating a lowpass filter you can easily determine if the filter is scaled properly by determining its DC response. Simply sum up the coefficients in the filter and the result should be close to 1.0. One thing to keep in mind is that you have a relatively short FIR filter. At 24 points with a sample rate of 2 kHz, your frequency resolution is 83 Hz (2000/24) and you thus won't be able to make changes in the frequency domain at a finer resolution than this.
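For example, something like this (a minimal sketch; it assumes your coefficients are in a float32_t array called filter_taps with FILTER_TAP_NUM entries, so adjust the names to match your code):

/* Sum the FIR coefficients: the DC gain of a unity-gain lowpass should come out close to 1.0 */
float32_t dcGain = 0.0f;
for (uint32_t n = 0; n < FILTER_TAP_NUM; n++)
{
    dcGain += filter_taps[n];
}
printf("FIR DC gain = %f\n", dcGain);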
Could you share your code and we'll take a quick look?
Please accept my apology for my previous post; it was uncalled for. Just a bit impatient, I am.
Paul, I determined the error, and your response helped me to do that. Somehow the FIR coefficients were getting trashed by the way I was declaring them, resulting in all the trash I was seeing. Now I'm getting values that I would have expected. I've created another set of filter coefficients: 0 to 45 Hz, G = 1; 50 Hz to 1 kHz, G = 0; 1.5 dB ripple, -80 dB attenuation, 978 taps.
Could you please elaborate a bit further on your comment about the resolution? If I were to use that calculation, it would now be 2000/978 = 2.05. Are you saying this is the finest frequency resolution I can accurately ascertain after processing it through the FFT?
Why does the number of taps affect the resolution of the FFT, which is a separate function? If so, how can I attain higher frequency resolution without changing the sampling frequency, which is at least 40x higher than the highest frequency I am processing (50 Hz)?
NB - I should have added that I am trying to obtain very high frequency resolution, sub-hertz if possible.
Again, my apologies and I thank you for taking the time to answer my questions.
As requested, here is the sample code I am using in the interrupt handler, which is set to run at SAMPLE_FREQ:
if( TIM_GetITStatus(TIM6, TIM_IT_Update) )
{
    volatile uint16_t uhADCxConvertedValue;
    float32_t smplRate = 0;

    if( sampleCount == 0 )
    {
        arm_fill_f32(0.0f, ADC_Values, INPUT_SAMPLES);
    }

    uhADCxConvertedValue = ADC_get();
    /* subtract the DC offset that was added, as this is not a bipolar ADC */
    ADC_Values[sampleCount] = ((uhADCxConvertedValue * VREF) / ADC_COUNTS) - 1.6f;

    if( sampleCount++ == INPUT_SAMPLES )
    {
        inputBufPtr  = &ADC_Values;
        outputBufPtr = &outputBuffer;

        arm_fir_init_f32(&S, FILTER_TAP_NUM, (float32_t *)&filter_taps, &firStateF32, BLOCK_SIZE);
        for( uint16_t i = 0; i < NUM_BLOCKS; i++ )
        {
            arm_fir_f32(&S, inputBufPtr + (i * BLOCK_SIZE), outputBufPtr + (i * BLOCK_SIZE), BLOCK_SIZE);
        }

        /* Process the data through the CFFT/CIFFT module */
        arm_cfft_f32(&arm_cfft_sR_f32_len512, outputBuffer, ifftFlag, doBitReverse);

        /* Calculate the magnitude at each bin */
        arm_cmplx_mag_f32(outputBuffer, cfftBuffer, fftSize);

        /* Get the maximum energy and its index */
        arm_max_f32(cfftBuffer, BLOCK_SIZE, &maxValue, &index);
        printf("Max Value = %.1f, at index = %i\n", maxValue, index);

        smplRate = (INPUT_SAMPLES / 1.0);
        printf("frequency = %.1f\n", (float32_t)((index * SAMPLE_FREQ) / smplRate));

        sampleCount = 0;
    }

    /* Clear TIM6 capture compare interrupt pending bit */
    TIM_ClearITPendingBit(TIM6, TIM_IT_Update);
}
(Just a minor comment before I go on vacation tomorrow; I hope you don't mind me chipping in.)
Though I do not know much about FFTs, I will recommend clearing the pending bit at the beginning of your interrupt.
It's always good to clear it as early as possible, so you reduce the risk of missing an interrupt.
It could affect your readings, depending on how fast your MCU is running.
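Something along these lines, for instance (just a sketch; TIM6_DAC_IRQHandler is the usual handler name for TIM6 on the STM32F4, so adjust it to whatever your project uses):

void TIM6_DAC_IRQHandler(void)
{
    if( TIM_GetITStatus(TIM6, TIM_IT_Update) != RESET )
    {
        /* Clear the update flag first, so a new update event cannot be missed */
        TIM_ClearITPendingBit(TIM6, TIM_IT_Update);

        /* ... ADC sampling and processing ... */
    }
}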
Also, there is an unwritten rule, a no-no, a do-not-do: never call library functions (printf, for instance) from an interrupt service routine.
Do not mind at all! I appreciate your comments. 100% agree on clearing the interrupt; I have already changed that. The printf is purely for debug through SWO, and yes, it does impact the performance of the interrupt handler to a large degree; it will be removed after debugging.
Happy USA day, jensbauer.
Hi Dr. Beckmann,
Have you had the chance to look at the code I posted to see where things are going astray?
Right now I'm convinced that something weird is going on when I use a 1 kHz sampling rate and an FFT of length 1024 (2048 samples). The bin values are not correct, and it does not appear to be a scaling issue; no uniform scale factor works out for the bins of interest.
I work for Paul at DSP Concepts on the CMSIS library. Would you mind emailing me a small demo program that illustrates your issue? I looked at the code you posted, but there seems to be some code that's probably not related to the problem, and also some important details missing. I would like to see the actual input data that causes the issue you reported. Also, it might not be related at all, but would you please let me know which processor you are using (Cortex-M0, M1, M3, M4, or M4 with FPU), and whether it is little or big endian? All of this would greatly speed up my attempt to find and fix the problem.
Thank you for taking the time to answer my questions. Being relatively new to working with DSP, I am probably misunderstanding a fundamental here with CMSIS functionality, or perhaps just DSP.
I seem to get it working at one frequency, but not the one I want (500 Hz). Bizarre.
A simple objective, with slow data; no fancy buffering or DMA of data needed. I have an eternity to process between interrupts. I am sampling using a timer interrupt set by the SAMPLE_FREQ define. When the interrupt count reaches BUFFER_SIZE, I then process the FFT. For the sake of simplicity, let's say I want a sampling rate of 500 Hz, which is 10x the highest frequency of interest and well above the Nyquist rate, so it should be more than adequate.
So, can I take a step back, ask you to forget the code I posted, and instead put this question: given the conditions in the last paragraph and the 500 Hz sampling rate, can you please tell me what I should set these variables to in order to get a resolution of 0.5 Hz:
When using the call: arm_fir_f32
The P-M (Parks-McClellan) filter coefficients were generated by ScopeFIR at Fs = 500 Hz
Using IAR EWARM 7.1
Your specific questions:
Using an STM32F407VG processor (ST's Discovery PCB MB997C)
Using this library: arm_cortexM4lf_math.lib
Assuming little endian, since I downloaded the library pre-built and I believe that's the default.
Please let me know if there's any other info you need.
My apologies for the delayed response to this. I must not have this forum set up to email me when I get a response. I assumed you would email me and didn't come back to this forum until today.
Block size should be irrelevant; it only affects the performance of the function (generally larger block sizes are more efficient).
FFT size: 1024 should get you sufficiently close to 0.5 Hz (500/1024 = 0.48828125 Hz).
Buffer size: just set it to FFT size * 2 (to hold the interleaved complex values).
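So, roughly, something like this (a sketch using the define names from your code):

#define SAMPLE_FREQ  500               /* Hz, timer interrupt rate */
#define FFT_SIZE     1024              /* bin width = 500 / 1024 = 0.48828125 Hz */
#define BUFFER_SIZE  (FFT_SIZE * 2)    /* 2048 values: interleaved real/imag pairs for the complex FFT */
#define BLOCK_SIZE   32                /* FIR block size; only affects performance */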
I'll respond to your other thread in just a minute
Thank you for responding, please accept my apologies for the last post.
I think all issues are now fixed. It appears that the interrupt rate was incorrect for anything other than a 2000Hz rate. To get more accuracy I used a prescaler value and then divided down by that, and can now sample at the required rate.
Now just one other question to ask of you. In the presence of noise, how can one be certain that the signal of interest is not noise? In other words, how could the signal be validated?
I'm not an expert in solving DSP questions (more of a firmware coder), but I'll toss in my 2 cents. You could look at the SNR of the signal in question, and if it's large enough that should indicate it's not a part of the background noise. You could track the signal over multiple interrupts to see if it is still there and still at the same frequency. You could check that only 1 or 2 bins have very large values, as opposed to a plateau of large values.
That's about all I can think of. There's not much you can do if your noise produces a strong sinusoid at the frequency of interest aside from find a way to filter it out before you measure it.
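If it helps, here is a rough sketch of the peak-versus-floor check (it reuses the cfftBuffer and fftSize names from your code; the 10x threshold is an arbitrary number for illustration):

float32_t peakVal, meanVal;
uint32_t  peakIdx;

/* Peak bin versus the average magnitude over the first half of the spectrum */
arm_max_f32(cfftBuffer, fftSize / 2, &peakVal, &peakIdx);
arm_mean_f32(cfftBuffer, fftSize / 2, &meanVal);

/* Crude detection: treat it as a real tone only if the peak stands well above the floor */
if( (meanVal > 0.0f) && (peakVal > 10.0f * meanVal) )
{
    printf("tone detected near bin %lu\n", (unsigned long)peakIdx);
}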
After looking at the CMSIS documentation, there's an example there that uses the SNR function of the CMSIS DSP library:
/* ----------------------------------------------------------------------
** Compare the generated output against the reference output computed
** in MATLAB.
** ------------------------------------------------------------------- */
snr = arm_snr_f32(&refOutput[0], &testOutput[0], TEST_LENGTH_SAMPLES);

if (snr < SNR_THRESHOLD_F32)
{
    status = ARM_MATH_TEST_FAILURE;
}
else
{
    status = ARM_MATH_SUCCESS;
}
Dan, since you indicated that your area isn't DSP, is it possible that Dr.Beckmann could answer this question:
In the CMSIS documentation example for SNR, the input signal AND the test signal are generated by MATLAB. In the real world, how is a reference signal generated when receiving signals of unknown amplitude? I am thinking that the answer isn't likely one sentence. This is a whole side of DSP processing I have not done before, so if you can point me to some reading (website, document, book) that would provide some insight as to how to correctly use the CMSIS function, I would be grateful.
To distinguish between signal and noise you need a good model of each. This is very common in other applications like communications in which you want to distinguish between 0's and 1's. It's usually not 100% reliable since noise can sometimes appear as signal and vice versa.
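One simple, practical approach along those lines is to average the magnitude spectrum over several frames: random noise bins smooth out while a persistent tone keeps its level. A rough sketch, reusing the cfftBuffer, fftSize, maxValue and index names from your code (avgSpectrum, frameCount and NUM_AVG_FRAMES are made up for the illustration):

#define NUM_AVG_FRAMES 8

static float32_t avgSpectrum[FFT_SIZE];
static uint32_t  frameCount = 0;

/* Accumulate each new magnitude spectrum */
arm_add_f32(avgSpectrum, cfftBuffer, avgSpectrum, fftSize);

if( ++frameCount == NUM_AVG_FRAMES )
{
    /* Average, then look for a stable peak that stands above the smoothed noise floor */
    arm_scale_f32(avgSpectrum, 1.0f / NUM_AVG_FRAMES, avgSpectrum, fftSize);
    arm_max_f32(avgSpectrum, fftSize / 2, &maxValue, &index);

    /* Reset for the next averaging interval */
    arm_fill_f32(0.0f, avgSpectrum, fftSize);
    frameCount = 0;
}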
Still something bizarre going on with this - sorry to open this question again.
I've got the timer interrupt generating at these intervals:
TIM_TimeBaseStructure.TIM_Prescaler = 50;
TIM_TimeBaseStructure.TIM_Period = 1605; // timer interrupts at 1 ms = 1 kHz
Use the above, OR use:
TIM_TimeBaseStructure.TIM_Period = 800;  // timer interrupts at close to 500 us = 2 kHz
The main loop runs until loopCtr == BUFFER_SIZE, then runs the FFT.
Below, using these defines gives me bins with 0.5 Hz resolution. (A 25 Hz input produces its peak at bin #50.)
#define SAMPLE_FREQ 1000
#define BUFFER_SIZE 2048
#define FFT_SIZE 1024
So why do these defines give me just 1 Hz resolution? (A 25 Hz input produces its peak at bin #25.)
#define SAMPLE_FREQ 2000
#define BUFFER_SIZE 2048
The only thing that changes between those defines is the SAMPLE_FREQ. Also, I am at a loss to understand why the first set provides 0.5Hz resolution at all.
Because according to DSP theory, Fs / FFTsize = bin resolution.
What am I missing here?
No, I don't expect you to debug the program. I'm trying to figure out where the issue is, nothing more.
I am sending you the source files for the project directory in a RAR file. I'm confident that when you look at the code, you'll see there is no other code; it's as simple as I have described, and hopefully you can tell me why the CMSIS call with the defines in question is not returning the expected values.
I agree it should be Fs / FFTsize = bin resolution.
So in your first case, the resolution should be 0.9765625 Hz, with the 25 Hz signal showing up mostly in bins 25 and 26; and in the second case, the resolution should be 1.953125 Hz, with the 25 Hz signal showing up mostly in bins 12 and 13.
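A quick way to double-check this in code (a sketch using the SAMPLE_FREQ and FFT_SIZE defines you posted; 25.0f is just your test tone):

/* Expected peak bin for a pure tone: round(f_tone * FFT_SIZE / SAMPLE_FREQ) */
float32_t binWidth    = (float32_t)SAMPLE_FREQ / (float32_t)FFT_SIZE;
uint32_t  expectedBin = (uint32_t)((25.0f / binWidth) + 0.5f);
printf("bin width = %.4f Hz, 25 Hz should peak near bin %lu\n", binWidth, (unsigned long)expectedBin);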
I'm not sure why you're getting roughly double that. I have an inkling this might have more to do with your other code than with CMSIS. I feel it might be appropriate at this point to let you know that I'm here to support you with CMSIS-related questions, not to debug any bugs you come across in your own code. If you're confident that it's CMSIS that is misbehaving in some way, or you're unsure how to correctly use a CMSIS function, please do let me know and I will work through it with you to resolve the issue. As always, I need as much information as possible to debug it; preferably a working example program that involves only CMSIS and nothing else.
edited which bins it should show up in because I fail at basic math
(Hope I deleted that last post before you read it. Thought I might have found the problem... but not so.)
An update as to what I've also tried. I'm now setting the timer interrupt this way, using the prescaler as the main divider and leaving the period at 1000. It works, as long as the product is close to 82500 (my Discovery board is off a bit from the 84000 it should be).
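For reference, the arithmetic I'm working from (a sketch; it assumes TIM6 is clocked at 84 MHz, the standard Discovery clock tree):

/* Update rate = TIM6CLK / ((Prescaler + 1) * (Period + 1)) */
/* e.g. with the period left at 1000 - 1 = 999, a prescaler of 84 - 1 = 83 gives 84 MHz / (84 * 1000) = 1 kHz */
TIM_TimeBaseStructure.TIM_Prescaler = (84000000 / (SAMPLE_FREQ * 1000)) - 1;
TIM_TimeBaseStructure.TIM_Period    = 1000 - 1;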
I've found that these settings provide 0.5 Hz resolution, although they should not (the 24 Hz peak lands at bin 48). It seems to be related to the buffer size. The result updates every 2 seconds, which is what would be expected given the sample frequency and buffer size:
#define SAMPLE_FREQ 1000
These settings behave quite differently: the 24 Hz peak is at bin 24 (1 Hz resolution), and the update rate is 1 second.
#define BUFFER_SIZE 1024
#define FFT_SIZE 512
Hope this additional info helps discover why the FFT is behaving this way...thanks again for looking into this.
NB: On my Discovery board, I installed the LSE crystal and caps to get sub-second time stamps so I could measure the processing intervals. That's the reason the RTC stuff is there (at this point), for nothing else.
Forget it then...
Clearly, there's an issue with the FFT at specific rates, buffer sizes, and FFT sizes. It would be nice if it were debugged before release. I'll be sure to post here and let you know how SIGLIB works out.
Thanks for your help.