I have some questions about the correct use of the CMSIS DSP library call arm_fir_f32. First, I'll provide some background about what I am doing and what the setup is.
I have a STM32F4 Discovery board, using IAR EWARM for programming. Just for testing purposes, I'm generating a low frequency test signal at 40Hz and feeding it into one of the ADC inputs. The signal is biased to swing from 0 to about 2.5Vpp. The signal has a low to moderate amount of broadband noise - but at this point I am not purposely mixing or introducing any other signals with it. There is a timer interrupt set to sample frequency of 2KHz, with a sampling buffer of 2048 samples.
I have already tested and am using the FFT function arm_cfft_f32, and can accurately determine (track) the frequency of the incoming signal when I change it at the source. This seems to be working well.
Now, I would like to use the arm_fir_f32 filter. To do this, I started out by reading the CMSIS documentation on the function. To implement a FIR low pass and generate the tap coefficients, I am using this website's online tool.
I generated a 4th order filter, set the sampling rate to match my software, with a cutoff of 60Hz, and forced generation to 24 taps so the count is even. So the BLOCK_SIZE is 32, and the number of blocks is 1024/32 = 32.
Following the example from CMSIS on this function, I believe I've set up correctly. So the chain looks like this:
ADC --> FIR --> FFT
However, I'm not getting the result I would expect. The values returned in the FFT's output buffer are enormously large (they are not if I comment out/circumvent the FIR calls). This leads me to believe I am missing a step. Do I need to normalize the values? I thought that because I input the sample rate into the FIR filter design, this wouldn't be required - but maybe that is incorrect.
Can someone please provide some insight or assistance as to what I am missing or doing incorrectly to apply the FIR processing?
I only require resolution at low frequencies: 10Hz to 50Hz
I'm confident the FIR has adequate taps for the filtering required. I'm still trying to understand what impact a filter specified with 2 Hz resolution, placed ahead of an FFT with (2000/2048) Hz bin resolution, has on the samples. Does it mean that the distribution of error (of 2 Hz) should fall around some Gaussian mean? i.e., 30 Hz could read as 31 or 29, but more likely 30.4 or 29.6 for approximately 70% of the samples?
After doing a bit more research, I've found that it might be an improvement to use a windowing function. Do you agree? As far as I can see, CMSIS-DSP has no window functions. From what I understand, the Hann window would reduce the discontinuities at the buffer endpoints, which otherwise cause noise that creates the jitter. Can I implement the window by processing each sample as w[n] = 0.5 - 0.5*cos(2*pi*n/(N-1))? I've tried this, but it makes the results worse. Should the window be applied after the cfft?
One further question: the CMSIS function arm_cfft_f32 returns a buffer which you can then search for a peak sample n (n > 0) and then calculate the frequency. What function is available that will give me the indices of the first N peaks? Or is this a function left for the user to write?
Again, I appreciate your generous time and assistance.
My comment regarding the "2 Hz resolution of the FIR filter" just refers to how accurately you can specify its frequency response. It can be just about any shape (e.g., low pass at 40 Hz, low pass at 80 Hz). However, you can't have the filter exhibit gain g0 at 10 Hz and then a widely different gain at 11 Hz. I think it's safe for what you are doing.
It's a good idea to use a window (Hann is a good choice) when computing FFTs. If you don't, you'll find that the result depends heavily on what is happening at the edges of the signal. If you only care about low frequencies and you want very good resolution, I would recommend taking a long data set at the 2 kHz sample rate, filtering with a low pass whose cutoff frequency is 100 Hz, and then decimating by a factor of 10. This would reduce your signal length by a factor of 10 and reduce the sample rate to 200 Hz. If you then computed an FFT with, say, 1024 points, the frequency resolution would be 200/1024 = 0.2 Hz, which is much better than what you had before.
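The filter-then-decimate idea above can be sketched in plain C as below. This is only an illustration: the moving average is a crude stand-in for a properly designed anti-alias low pass (CMSIS-DSP also provides arm_fir_decimate_f32, which performs the FIR and the decimation in one call), and all names here are made up:

```c
#include <stddef.h>

/* Average each group of M samples (a crude anti-alias low pass) and
   keep one output per group, reducing the sample rate by a factor M. */
static size_t decimate_by(const float *in, size_t n, float *out, size_t M)
{
    size_t outLen = 0;
    for (size_t i = 0; i + M <= n; i += M) {
        float acc = 0.0f;
        for (size_t k = 0; k < M; k++)
            acc += in[i + k];
        out[outLen++] = acc / (float)M;  /* one low-passed sample per M inputs */
    }
    return outLen;
}
```

With M = 10 at a 2 kHz input rate this yields a 200 Hz output rate, matching the numbers in the post above.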
Sorry, no function in CMSIS which finds the N largest peaks. But that shouldn't be too hard to write.
OK, thank you for that, no more questions. 8--)
I think it's working well enough now, where I can be happy with consistent 1Hz resolution.
I am processing low frequency signals (for now, just a sine wave) under 50 Hz and using a low pass filter with Fc at 100Hz. Running samples through a simple Parks-McClellan FIR with 351 taps.
the filter was created for Fs = 1000Hz.
Input signal = 27Hz.
Fs = 1000Hz
Buffer size = 4096
N (fft length) = 2048
According to your post, the resolution should be: 1000/2048 = 0.49Hz
The largest bin should be the fundamental, which is at 24 in this case. After returning from the CMSIS call to arm_cmplx_mag_f32, here are the bins around it:
22: cfftBuffer = 42.4
23: cfftBuffer = 96.9
24: cfftBuffer = 470.8
25: cfftBuffer = 218.8
26: cfftBuffer = 88.3
If: f = bin# x (Fs / N) , then: f = 24 * (1000/2048) = 11.71Hz (should be close to 27Hz)
That doesn't work out anywhere near what I am sending into the ADC, or the calculation. Where am I going astray?
Thanks again for looking at my questions...
Your analysis looks correct. The frequency resolution is 0.49 Hz per FFT bin. The input sine wave at 27 Hz should appear at 27/0.49 = about bin 55. Something else must be going on. Could you post your code so that I can take a quick look at it?
Hi Dr. Beckmann,
The code is posted below. It runs in the Timer interrupt, defined by:
TIM_TimeBaseStructure.TIM_Period = ( (RCC_Clocks.PCLK2_Frequency / SAMPLE_FREQ) - 1);
Here's where the issue seems to be. It works perfectly with frequency precision to 0.5Hz, or better, if I use these defines:
#define SAMPLE_FREQ (2000)
#define INPUT_SAMPLES (4096)
Here are the bins at/around maximum when running with the above defines:
cfftBuffer = 49.8
cfftBuffer = 167.3
cfftBuffer = 1024.9
cfftBuffer = 503.9
cfftBuffer = 226.6
cfftBuffer = 142.8
cfftBuffer = 96.9
Frequency calc: 55 * (2000/4096) = 26.9 (input generator at 27Hz)
If I use the defines as below in the sample code, it does not compute the correct frequency; the maximum bins do not seem to be in the correct place. Here are the bins around the maximum:
(sorry about the previous post, I pasted the incorrect bins from another run)
cfftBuffer = 27.7
cfftBuffer = 13.5
cfftBuffer = 28.4
cfftBuffer = 563.1
cfftBuffer = 156.8
cfftBuffer = 91.2
cfftBuffer = 67.1
12 * (1000/2048) = 5.9 (input generator at 27Hz)
BTW - I did try the decimation method as you suggested in your post (just used a 4 point moving average filter) but got similar bad results.
Thank you again for assisting me with this, very appreciated. (just hope I'm not missing something obvious)
/***** CODE BELOW ***/
#define SAMPLE_FREQ       (uint32_t)1000
#define INPUT_SAMPLES     (uint32_t)2048
#define FFT_SIZE          (uint32_t)(INPUT_SAMPLES / 2)
#define MAX_FREQ          (float32_t)50.0
/* max energy bin index: samples divided by sample rate (seconds of data),
   multiplied by max frequency and truncated to int. The float cast keeps
   the division from truncating to an integer first. */
#define MAX_ENERGY_INDEX  (uint32_t)(((float32_t)INPUT_SAMPLES / SAMPLE_FREQ) * MAX_FREQ)

/* Timer 6 Interrupt Handler */
if( TIM_GetITStatus(TIM6, TIM_IT_Update) )
{
  volatile uint16_t uhADCxConvertedValue;
  float32_t smplRate = 0;

  uhADCxConvertedValue = ADC_get();
  /* subtract small offset from sigGen */
  ADC_Values[sampleCount] = (((uhADCxConvertedValue * VREF) / ADC_COUNTS) - 0.58f);

  if( ++sampleCount == INPUT_SAMPLES )  /* pre-increment: a post-increment here
                                           would overrun the buffer by one */
  {
    inputBufPtr  = ADC_Values;
    outputBufPtr = outputBuffer;

    arm_fir_init_f32(&S, FILTER_TAP_NUM, (float32_t *)filter_taps, firStateF32, BLOCK_SIZE);
    for( uint16_t i = 0; i < NUM_BLOCKS; i++ )
    {
      arm_fir_f32(&S, inputBufPtr + (i * BLOCK_SIZE), outputBufPtr + (i * BLOCK_SIZE), BLOCK_SIZE);
    }

    if( FFT_SIZE == 1024 )
      arm_cfft_f32(&arm_cfft_sR_f32_len1024, outputBuffer, ifftFlag, doBitReverse);
    else
      arm_cfft_f32(&arm_cfft_sR_f32_len2048, outputBuffer, ifftFlag, doBitReverse);

    /* Calculate the magnitude at each bin */
    arm_cmplx_mag_f32(outputBuffer, cfftBuffer, FFT_SIZE);
    cfftBuffer[0] = 0;  /* zero the DC bin so it can't win the peak search */
    maxValue = 0;

    /* find maximum bin (CMSIS call doesn't appear to work correctly!) */
    for( uint16_t i = 0; i < MAX_ENERGY_INDEX; i++ )
    {
      if( cfftBuffer[i] > maxValue )
      {
        maxValue = cfftBuffer[i];
        index = i;
      }
    }

    /* compute frequency at max bin */
    smplRate = (INPUT_SAMPLES / 1.0f);
    float32_t frequency = ((index * SAMPLE_FREQ) / smplRate);
    printf("Frequency = %.1f, bin energy[%i] = %.1f\n", frequency, index, maxValue);

    sampleCount = 0;
  }
  /* Clear TIM6 Capture compare interrupt pending bit */
}
/* end interrupt handler */
(Just a minor comment, before I go on vacation tomorrow, I hope you don't mind me chipping in).
Though I do not know much about FFTs, I recommend clearing the pending bit at the beginning of your interrupt handler.
It's always good to clear it as early as possible, so you reduce the risk of missing an interrupt.
It could affect your readings, depending on how fast your MCU is running.
Also, there is an unwritten rule, a no-no: never call library functions (such as printf) from an interrupt service routine.
Don't mind at all! I appreciate your comments. 100% agree on clearing the interrupt; I have already changed that. The printf is purely for debug through SWO,
and yes, it does impact the performance of the handler to a large degree; it will be removed after debugging.
Happy USA day, jensbauer.
Have you had the chance to look at the code I posted to see where things are going astray?
Right now I'm convinced that something weird is going on when I use a 1 kHz sampling rate and an FFT of length 1024 (2048 samples). The bin values are not correct, and it does not appear to be a scaling issue: there is no uniform scale factor that makes the bins of interest work out.
I work for Paul at DSP Concepts on the CMSIS library. Would you mind emailing me a small demo program that illustrates your issue? I looked at the code you posted, but there seems to be some code that's probably not related to the problem, and also some important details missing. I would like to see the actual input data that causes the issue reported. Also, it might not be related at all, but would you please let me know what processor you are using (Cortex-M0, M1, M3, M4, or M4 with FPU), and little or big endian? All of this would greatly speed up my attempt to find and fix the problem.
Thank you for taking the time to answer my questions. Being relatively new to working with DSP, I am probably misunderstanding a fundamental here with CMSIS functionality, or perhaps just DSP.
I seem to get it working at one sampling rate, but not the one I want (500 Hz). Bizarre.
A simple objective, with slow data; no fancy buffering or DMA is needed, and I have an eternity to process between interrupts. I am sampling using a timer interrupt set by the sampling-frequency define. When the sample count reaches BUFFER_SIZE, I then process the FFT. For the sake of simplicity, let's say I want a sampling rate of 500 Hz - five times the Nyquist rate for a 50 Hz signal, which should be more than adequate.
So, can I take a step back, ask you to forget the code I posted and instead put this question: Given the conditions in the last paragraph, can you please tell me what I should set these variables to, given the 500Hz sampling rate, to get a resolution of 0.5Hz:
When using the call: arm_fir_f32
The P-M filter coefficients were generated by ScopeFIR at Fs = 500Hz
Using IAR EWARM 7.1
Your specific questions:
Using an STM32F407VG processor (ST's Discovery PCB MB997C)
Using this library: arm_cortexM4lf_math.lib
Assuming endian is little since I downloaded the library already built and I believe that's the default.
Please let me know if there's any other info you need.
My apologies for delayed response to this. I must not have this forum set up to email me when I get a response. I assumed you would email me and didn't come back to this forum until today.
Block size should be irrelevant; it only affects the performance of the function (larger block sizes are generally more efficient).
An FFT size of 1024 should get you sufficiently close to 0.5 Hz (500/1024 = 0.48828125 Hz).
For the buffer size, just set it to the FFT size * 2 (to hold interleaved complex values).
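Put concretely, the sizes above might look like this, reusing the define names from the code posted earlier in the thread (the exact values here are a suggestion, not a confirmed configuration):

```c
#include <stdint.h>

#define SAMPLE_FREQ    ((uint32_t)500)            /* timer interrupt rate, Hz   */
#define FFT_SIZE       ((uint32_t)1024)           /* 500/1024 = 0.48828125 Hz/bin */
#define INPUT_SAMPLES  ((uint32_t)(FFT_SIZE * 2)) /* 2 floats per complex bin   */
```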
I'll respond to your other thread in just a minute
Thank you for responding, please accept my apologies for the last post.
I think all issues are now fixed. It appears that the interrupt rate was incorrect for anything other than a 2000Hz rate. To get more accuracy I used a prescaler value and then divided down by that, and can now sample at the required rate.
Now just one other question to ask of you. In the presence of noise, how can one be certain that the signal of interest is not noise? In other words, how could the signal be validated?
I'm not an expert in solving DSP questions (more of a firmware coder), but I'll toss in my 2 cents. You could look at the SNR of the signal in question, and if it's large enough that should indicate it's not a part of the background noise. You could track the signal over multiple interrupts to see if it is still there and still at the same frequency. You could check that only 1 or 2 bins have very large values, as opposed to a plateau of large values.
That's about all I can think of. There's not much you can do if your noise produces a strong sinusoid at the frequency of interest, aside from finding a way to filter it out before you measure.
After looking at the CMSIS documentation, there's an example there that uses the SNR function of the CMSIS DSP library:
/* -------------------------------------------------------------------
** Compare the generated output against the reference output computed
** in MATLAB.
** ------------------------------------------------------------------- */
snr = arm_snr_f32(refOutput, testOutput, TEST_LENGTH_SAMPLES);
if (snr < SNR_THRESHOLD_F32)
  status = ARM_MATH_TEST_FAILURE;
else
  status = ARM_MATH_SUCCESS;
Dan, since you indicated that your area isn't DSP, perhaps Dr. Beckmann could answer this question:
In the CMSIS documentation example for SNR, the input signal AND the test signal are generated by Matlab. In the real world, how is a reference signal generated when the received signals have an unknown amplitude? I suspect the answer isn't likely to fit in one sentence. This is a whole side of DSP processing I have not done before, so if you can point me to some reading (website, document, book) that would provide insight into how to correctly use the CMSIS function, I would be grateful.
To distinguish between signal and noise you need a good model of each. This is very common in other applications like communications in which you want to distinguish between 0's and 1's. It's usually not 100% reliable since noise can sometimes appear as signal and vice versa.