Hi to you all, I have firmware running on an NXP LPCLink2 board (LPC4370: 204 MHz Cortex-M4 MCU) which basically does this:
My problem is that my code is too slow, and every now and then an overwrite occurs.
Using the DMA, I'm saving the ADC data, which I get in two's complement format (offset binary is also available), into a uint32_t buffer, and I try to prepare it for the CMSIS DSP functions by converting the buffer into float32_t: here's where the overwrite occurs. It's worth saying that I'm currently using software floating point, not hardware.
The CMSIS library also accepts fractional formats like q31_t, q15_t and so on, and since I don't strictly need floating-point maths I could even use these formats if that would save me precious time. It feels like I'm missing something important about this step; that's no surprise, since this is my first project on a complex MCU. Any help/hint/advice would be highly appreciated and would help me in my thesis.
I'll leave here the link for the (more detailed) question I asked in the NXP forums, just in case: LPC4370: ADCHS, GPDMA and CMSIS DSP | NXP Community .
Thanks in advance!
First of all, I'll talk about what's (in theory) slow about your shift operations.
Normally, the compiler should figure this out by itself, but if you've turned off optimizing, it's not going to happen.
    *pSrc = (*pSrc) << shiftBits;
    *pSrc = (*pSrc) >> shiftBits;
We don't need to go down to the assembly-level.
This is what happens in unoptimized code:
1: Read a value from memory
2: Shift a value by n positions
3: Store a value in memory
4: Read a value from memory
5: Shift a value by n positions
6: Store a value in memory
Reading a value from memory requires 2 clock cycles.
Shifting the value to the left or right requires 1 clock cycle.
Storing the value requires 1 clock cycle.
If the code is not optimized by the compiler, then the code can be improved by removing step 3 and step 4.
As I cannot see the full loop, I can't give a full suggestion on optimizing, apart from what I've written earlier.
My earlier suggestion adapted to shifting:
void ...(...)
{
    register int32_t i;              /* index */
    register int32_t *d;             /* destination (destination seems to be the same as source) */
    d = &((int32_t *)pSrc)[length];  /* point d past the end of the array */
    i = -length;                     /* convert length of array to a negative index */
    do
    {
        d[i] = (d[i] << shiftCount) >> shiftCount;
    } while(++i);                    /* increment index and keep going until i wraps to 0 */
}
In other words: do not split the shift into several 'stages'; it might impact performance, as the code could grow.
If the code is built as I planned, it would result in something like this:
    lsls    r1, r1, #2      /* length in bytes */
    adds    r0, r0, r1      /* point d to end of array */
    rsbs    r2, r1, #0      /* index = -length (in bytes) */
loop:
    ldr     r3, [r0, r2]    /* [2] get 12-bit ADC value */
    sbfx    r3, r3, #0, #12 /* [1] sign-extend it to 32 bits */
    str     r3, [r0, r2]    /* [1] store the sign-extended result */
    adds    r2, r2, #4      /* [1] increment index */
    bne     loop            /* [1/1+P] go round loop until index wraps to 0 */
The numbers in square brackets are how many clock-cycles I expect the code to spend.
The last one has the format [branch not taken/branch taken], where P is 'prefetch'; P is a value between 1 and 3 (normally 1).
That means if your code runs from SRAM, then per sample, it should cost 7 clock cycles if P is 1.
We'll add an extra clock cycle, so we won't be disappointed. Dividing 204 MHz by 8 clock cycles allows us to process 25.5 million samples per second.
However! You also need to remember that the DSP needs to process the data.
In addition to using the above loop, I recommend changing the optimization level.
If I understand correctly, LPCXpresso is using gcc, and if that's the case, then it's easy for me to tell you how to change the optimization level:
In case you're able to run your gcc from the command-line, try this:
arm-none-eabi-gcc --help=optimizers
-It will give you a long list of optimization options, but the following is the important one: -O<number>.
Normally I use -Os (for size optimization), but that's not what you want in this case!
-Ofast is another way of saying you want fast code; here's the description: "Optimize for speed disregarding exact standards compliance".
From what I can see on NXP's web-site, you need to specify the setting inside the IDE; here's where they say it is:
Project -> Properties -> C/C++ Build -> Settings -> Tool Settings -> MCU C Compiler -> Optimization -> Optimization Level
I recommend first trying -O3.
I usually write my own code in such a way that even when optimization is disabled, it's almost just as efficient.
The most important thing you can do is to get the optimization working; it should improve performance very much, especially if the compiler unrolls loops (unrolling means more operations per branch - or fewer branches per operation; take your pick).
Thanks for the detailed reply Jens. Right now I'm doing the sign-extension stuff inside Thibaut's function (https://www.m4-unleashed.com/parallel-comparison/ ), which is called during the DMA's Transfer Completed ISR. Here's my code:
uint32_t MAXmin;
int16_t sample[NUM_SAMPLE] = {0};
int16_t sample2[NUM_SAMPLE] = {0};
uint16_t shiftBits = 4;
uint16_t wordLenght = 8; /* Figured out by looking at the registers' addresses while debugging */

uint32_t SearchMinMax16_DSP(int16_t* pSrc, int32_t pSize)
{
  uint32_t data, min, max;
  int16_t data16;

  /* max variable will hold two max : one on each 16-bits half
   * same thing for min */

  /* Sign Extension */
  *pSrc = (*pSrc) << shiftBits;
  *pSrc = (*pSrc) >> shiftBits;
  *(pSrc + wordLenght) = (*(pSrc + wordLenght)) << shiftBits;
  *(pSrc + wordLenght) = (*(pSrc + wordLenght)) >> shiftBits;

  /* Load two first samples in one 32-bit access */
  data = *__SIMD32(pSrc)++;
  /* Initialize Min and Max to these first samples */
  min = data;
  max = data;
  /* decrement sample count */
  pSize -= 2;

  /* Loop as long as there remain at least two samples */
  while (pSize > 1)
  {
    /* Sign Extension */
    *pSrc = (*pSrc) << shiftBits;
    *pSrc = (*pSrc) >> shiftBits;
    *(pSrc + wordLenght) = (*(pSrc + wordLenght)) << shiftBits;
    *(pSrc + wordLenght) = (*(pSrc + wordLenght)) >> shiftBits;

    /* Load next two samples in a single access */
    data = *__SIMD32(pSrc)++;
    /* Parallel comparison of max and new samples */
    (void)__SSUB16(max, data);
    /* Select max on each 16-bits half */
    max = __SEL(max, data);
    /* Parallel comparison of new samples and min */
    (void)__SSUB16(data, min);
    /* Select min on each 16-bits half */
    min = __SEL(min, data);
    pSize -= 2;
  }

  /* Now we have maximum on even samples on low halfword of max
   * and maximum on odd samples on high halfword */

  /* look for max between halfwords 1 & 0 by comparing on low halfword */
  (void)__SSUB16(max, max >> 16);
  /* Select max on low 16-bits */
  max = __SEL(max, max >> 16);
  /* look for min between halfwords 1 & 0 by comparing on low halfword */
  (void)__SSUB16(min >> 16, min);
  /* Select min on low 16-bits */
  min = __SEL(min, min >> 16);

  /* Test if odd number of samples */
  if (pSize > 0)
  {
    data16 = *pSrc;
    /* look for max on low halfwords */
    (void)__SSUB16(max, data16);
    /* Select max on low 16-bits */
    max = __SEL(max, data16);
    /* look for min on low halfword */
    (void)__SSUB16(data16, min);
    /* Select min on low 16-bits */
    min = __SEL(min, data16);
  }

  /* Pack result : Min on Low halfword, Max on High halfword */
  return __PKHBT(min, max, 16); /* PKHBT documentation */
}
The bit extension is done at the lines marked /* Sign Extension */.
Great analysis about the clock/sample Jens!
Sounds great! How can I be sure this is happening?
Speaking of the compiler, this is the output of my actual configuration in LPCXpresso (S2D.c is the file containing the code we are talking about):
arm-none-eabi-gcc -nostdlib -L"/home/abet/LPCXpresso/link2_2/lpc_board_nxp_lpclink2_4370/Debug" -L"/home/abet/LPCXpresso/link2_2/lpc_chip_43xx/Debug" -L"/home/abet/LPCXpresso/link2_2/CMSIS_DSPLIB_CM4/lib" -Xlinker -Map="S2D.map" -Xlinker --gc-sections -Xlinker -print-memory-usage -mcpu=cortex-m4 -mthumb -T "S2D_Debug.ld" -o "S2D.axf" ./src/S2D.o ./src/cr_startup_lpc43xx.o ./src/crp.o ./src/sysinit.o -llpc_board_nxp_lpclink2_4370 -llpc_chip_43xx -lCMSIS_DSPLIB_CM4
Memory region Used Size Region Size %age Used
RamLoc128: 6688 B 128 KB 5.10%
RamLoc72: 0 GB 72 KB 0.00%
RamAHB32: 0 GB 32 KB 0.00%
RamAHB16: 0 GB 16 KB 0.00%
RamAHB_ETB16: 0 GB 16 KB 0.00%
RamM0Sub16: 0 GB 16 KB 0.00%
RamM0Sub2: 0 GB 2 KB 0.00%
SPIFI: 13668 B 4 MB 0.33%
Also,
arm-none-eabi-gcc --version
gives:
arm-none-eabi-gcc (GNU Tools for ARM Embedded Processors) 5.2.1 20151202 (release) [ARM/embedded-5-branch revision 231848]
Looking at the project properties as suggested by Jens I found out that I had no optimization level here:
So I'm going to turn this on and implement the sign extension inside Thibaut's function the way Jens suggested! And see if I get some good news!
The four sign-extension lines (the shifts of *pSrc and *(pSrc + wordLenght)) look very wrong to me.
To me, it seems you're doing the same job twice.
Eg. after 8 iterations, the values you've already sign-extended, will be sign-extended again.
I could be wrong, but I better mention it; are you sure that they're doing what you want ?
(I would remove them completely)
About the 'prefetch' on branches (P):
Prefetch only happens when necessary. It's not really something you're in control of (especially not when using C code).
-But it may be 3 the first time the branch jumps back in the loop and then 1 from that point on.
If an interrupt happens while you're inside the loop, P might become 3 again.
But as you see, this is something that's rare, so I think you can assume the value 1.
RAM usage looks great. There's plenty for placing code in SRAM in a section that does not collide with the DMA.
As far as I can tell, the DMA buffer is somewhere in RamLoc128.
That means you can pick any of the other ram locations (I'd suggest one of the AHB sections) for the code.
Now, I just don't know which address RamLoc128 is.
(I particularly like that NXP measures 0 Bytes in GB).
About Optimization:
The four sign-extension lines look very wrong to me. [...] are you sure that they're doing what you want?
Unfortunately no, I'm not: the purpose of those lines is to point to the 2nd value of the pair being processed by Thibaut's function and sign-extend that value! I tried to figure out how much I needed to move my pointer to get the next sample's address by looking at the samples' addresses through the debugger; maybe I was wrong?
so I think you can assume the value 1
That's ok for now, I won't tinker with it.
Today I did some tests using the -O3 optimization level for my project and the result is great (using Thibaut's function with no sign extension): the elapsed time for 128 samples is roughly 18us compared to the 160us without optimization! Fun fact: compiling the CMSIS DSP with -O2 gives slightly better performance than -O3! (updated the old post)
Speaking of the Memory Layout:
RAM usage looks great. There's plenty for placing code in SRAM in a section that does not collide with the DMA. As far as I can tell, the DMA buffer is somewhere in RamLoc128.
Yes, I completely agree: as I can see in LPCXpresso project's properties, RamLoc128 starts @ 0x10000000 (you can look for it in the picture I posted in my recap).
That means you can pick any of the other ram locations
Next step on this front I'll try to do: understand how I can do this.
It's great to hear about the optimization results.
abet wrote:
Today I did some tests using the -O3 optimization level for my project and the result is great (using Thibaut's function with no sign extension): the elapsed time for 128 samples is roughly 18us compared to the 160us without optimization!
Fun fact: compiling the CMSIS DSP with -O2 gives slightly better performance than -O3! (updated the old post)
The -O2 is a great observation. This might be connected to the fact that -O3 most likely unrolls the loops more than -O2.
If that's the case, it means that fetching the code from SPIFI slows down (I'm only guessing here).
If it's possible for you to link to a binary version of a pre-compiled CMSIS DSP library, try that.
I know that the people who developed the DSP library have spent a great deal of time optimizing it; as if it was the most important thing in the world to them.
-So if a precompiled library exists and you can link directly to that, then you'll most likely get the best performance regarding the DSP library.
Regarding the sign extension, there is a very simple way to do it: change the scale of your data!
If I understood properly, your sample buffer holds 16-bit values in which the 12 least significant bits are the ADC output value in two's complement, and I expect you have four 0 bits in front (bits 15-12).
I would symbolize this sample pair like this: sample n (0x0SA1), sample n+1 (0x0SA2).
When you use *__SIMD32(pSrc), it loads a register with both samples (0x0SA20SA1); then you just need to shift left by 4 bits to get 0xSA20SA10, which is a pair of signed 16-bit values!
If you need to keep your samples for further computations, you can write back to memory with this new scale.
This would give something like:
uint32_t SearchMinMax16_DSP(int16_t* pSrc, int32_t pSize)
{
  uint32_t data, min, max;
  int16_t data16;

  /* max variable will hold two max : one on each 16-bits half
   * same thing for min */

  /* Load two first samples in one 32-bit access */
  data = *__SIMD32(pSrc);
  /* put significant bits on bits 15-4 instead of 11-0 on each halfword */
  data <<= 4;
  /* Write back to memory to have useable 16-bits samples,
   * increment source pointer by a pair of samples */
  *__SIMD32(pSrc)++ = data;
  /* Initialize Min and Max to these first samples */
  min = data;
  max = data;
  /* decrement sample count */
  pSize -= 2;

  /* Loop as long as there remains at least two samples */
  while (pSize > 1)
  {
    /* Load next two samples in a single access */
    data = *__SIMD32(pSrc);
    /* put significant bits on bits 15-4 instead of 11-0 on each halfword */
    data <<= 4;
    /* Write back to memory to have useable 16-bits samples,
     * increment source pointer by a pair of samples */
    *__SIMD32(pSrc)++ = data;
    /* Parallel comparison of max and new samples */
    (void)__SSUB16(max, data);
    /* Select max on each 16-bits half */
    max = __SEL(max, data);
    /* Parallel comparison of new samples and min */
    (void)__SSUB16(data, min);
    /* Select min on each 16-bits half */
    min = __SEL(min, data);
    pSize -= 2;
  }

  /* Now we have maximum on even samples on low halfword of max
   * and maximum on odd samples on high halfword */

  /* look for max between halfwords 1 & 0 by comparing on low halfword */
  (void)__SSUB16(max, max >> 16);
  /* Select max on low 16-bits */
  max = __SEL(max, max >> 16);
  /* look for min between halfwords 1 & 0 by comparing on low halfword */
  (void)__SSUB16(min >> 16, min);
  /* Select min on low 16-bits */
  min = __SEL(min, min >> 16);

  /* Test if odd number of samples */
  if (pSize > 0)
  {
    data16 = *pSrc;
    /* put significant bits on bits 15-4 instead of 11-0 on low halfword */
    data16 <<= 4;
    /* Write back to memory to have useable 16-bits sample */
    *pSrc = data16;
    /* look for max on low halfwords */
    (void)__SSUB16(max, data16);
    /* Select max on low 16-bits */
    max = __SEL(max, data16);
    /* look for min on low halfword */
    (void)__SSUB16(data16, min);
    /* Select min on low 16-bits */
    min = __SEL(min, data16);
  }

  /* Pack result : Min on Low halfword, Max on High halfword */
  return __PKHBT(min, max, 16); /* PKHBT documentation */
}
With proper optimization options, I expect this to be quite efficient.
Yes, this is quite efficient!
It might be possible to gain 2 extra clock cycles per iteration by further unrolling, eg. processing four 16-bit samples at a time.
-It requires reading the values contiguously.
Eg. if the two load instructions are next to each other, then a clock cycle will be saved.
Another clock cycle is saved on the branch, since we branch half as often.
Explained in detail; this sequence will save 2 clock-cycles:
load : load: process : process : store : store : branch
This sequence will only save one clock-cycle:
load : process : store : load : process : store : branch
(the end of the while loop represents the branch)
Each time the unrolling doubles, 2 clock cycles are saved until there are not enough free registers.
If we keep the DMA buffer's size divisible by 16 or a higher power of two, we do not need the 'cleanup' for the remaining values.
That would make the code a little simpler and easier to maintain.
You're right. Now that you have an efficient computation technique, you can still improve the overall efficiency.
Usually, I try to let the compiler do its job where it's good!
In fact, you need to ask yourself what you can do to help it generate efficient code:
I made quite a detailed analysis about this on my blog (Simplest algorithm ever).
In the end:
- try to fix everything you can at compile time (bit shift count, buffer size, loop count ...)
- limit code visibility to what's necessary (using static functions will allow inlining optimizations inside a module); same for variables: do not use module variables (placed in RAM) when only local variables can be used
As demonstrated in my post, this will let you write safe code and allow the compiler to get rid of unused parts!
All of this is only true when you need to reach the best efficiency and can afford to turn compiler optimizations on, and very high!
RAM usage looks great. There's plenty for placing code in SRAM in a section that does not collide with the DMA. As far as I can tell, the DMA buffer is somewhere in RamLoc128. That means you can pick any of the other ram locations (I'd suggest one of the AHB sections) for the code. Now, I just don't know which address RamLoc128 is.
Jens, if the code is to run in SRAM, the location should be 0x10000000, the start of RamLoc128. It is the data space and DMA buffer that must be relocated. This is because RamLoc128 (starting at 0x10000000) is the area where the bootloader copies and executes the image from SPIFI (or another external source) when not executing in place.
From your post above:
I needed to add the SPIFI Flash in order to use the Link2 as an evaluation board (as described here: Introduction to Programming the NXP LPC4370 MCU Using the LPCxpresso Tools and Using Two LPC-Link2 Boards, and here: Using an LPC-Link2 as an LPC4370 evaluation board | NXP Community).
Using the LPC-Link2 with SPIFI Flash as the boot source is described in those two pages. What Jens is recommending is to execute from SRAM for the code to run faster. This means that you will add SPIFI Flash but program execution should not be directly from that location. The code from Flash should be copied to SRAM and executed there.
From my reply to Jens: the code should run from 0x10000000, the start of RamLoc128; it is the data space and DMA buffer that must be relocated.
Before Andrea posted what he is doing with the samples, this would not have been an advisable trick. Now that finding the minimum and maximum values seems to be the only task, the left-shift/change of scale is a simple but effective way of doing the sign extension.
For this project, writing a custom function for searching the minimum and maximum values, rather than using the CMSIS functions, is more advantageous: the input samples need to be sign-extended first, and the search for minimum and maximum values can be combined in a single function.
Many thanks goodwin for the insight on the memory map. I'll try to combine your answers and jensbauer's. As you suggest, I'm going to close this topic tomorrow (as soon as I can access the board and Thibaut's code) and see if I should open a new one on memory map/speed optimization as soon as I have more detailed info. I'm sorry that I was unavailable in the last few days.
Thank you Thibaut for your detailed answers. I studied and evaluated your code today and I got good results: it took roughly 22.5us for 128 32-bit words, so, correct me if I'm wrong, actually 256 samples! That's the same time the previous implementation took without the bit shifting.
Usually, I try to let the compiler do its job where it's good! In fact, you need to ask yourself what you can do to help it generate efficient code: I made quite a detailed analysis about this on my blog (Simplest algorithm ever). In the end:
- try to fix everything you can at compile time (bit shift count, buffer size, loop count ...)
- limit code visibility to what's necessary (using static functions will allow inlining optimizations inside a module); same for variables: do not use module variables (placed in RAM) when only local variables can be used
Thanks for these hints. I read the article you linked and now I think I better understand this new (to me, of course) way of programming you are showing: I feel like I need to study *a lot*. I just wonder how I can get those nice compiler outputs in GCC/LPCXpresso (which is actually a forked version of Eclipse).
Thanks again for your help! Now I'm going to close this post, but it was nice and helpful. Unfortunately I can choose just one correct answer, but I'd like to thank you all (once again) for what you are doing here. Lovely community.