I noticed that the stack usage of the library function _printf_f differs hugely between ARM Compiler 6 (tested with 6.11) and ARMCC 5.06 (update 6): a whopping 6x more on AC6 (1968 bytes on AC6 vs 320 bytes on AC5), as reported by the linker's static analysis.
Any reason why AC6 uses more stack in this case?
Appreciate any thoughts on this.
ST
Hi Milorad,
I think the differences are only significant for certain (C) library functions; so far I have seen them mainly in the floating point formatting printf functions. The rest of my functions showed more or less the same stack usage.
Please check whether your project links _printf_f (its stack usage is shown explicitly in the callgraph); only then will we be comparing the same thing :)
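A sketch of how you could check this from the command line (the object and image names here are placeholders, and the exact callgraph file name depends on your project settings, so treat this as an assumption and check the armlink documentation):

```shell
# Ask armlink for a static callgraph that includes per-function stack
# usage, then search it for _printf_f. Object/output names are placeholders.
armlink --callgraph --info=stack -o app.axf main.o retarget.o
# the callgraph is written to an .htm file next to the image; its exact
# name depends on your project settings
grep -i "_printf_f" app.htm
```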
Hi Seng,
I used:
printf("Test %f", 1./3);
BTW, I would suggest you start from the Blinky project and experiment there, or reduce your project to simple code so you can figure out where the difference comes from.
You can try what I did: use Blinky with the line I mentioned, see if you get the same result with both compilers, and go from there.
Best regards, Milorad
Yes, you are right. I tried with Blinky and the two compilers produced the same analysis results... hmm, now it gets interesting.
Let's see what I can find out.
Thanks!
Let us know what you find out Seng.
Just had the time to go back to this subject.
I got rid of all my floating point string formatting in sprintf and snprintf and replaced it with the function below:
static void printf_float (const char * s, uint8_t decimal, float value) {
    const char *tmpSign = (value < 0) ? "-" : " ";
    float tmpVal = (value < 0) ? -value : value;
    int tmpInt1 = tmpVal;                              // truncate to integer part
    float tmpFrac = tmpVal - tmpInt1;                  // get fraction
    // determine number of digits used by tmpInt1
    uint8_t sigDigits = fminf(6, truncf(log10f(fmaxf(1, fabsf(tmpInt1)))) + 1);
    // limit decimal to the remaining width
    decimal = fminf(decimal, fmaxf(0, 5 - sigDigits));
    int tempGain = powf(10, decimal);                  // gain computed after limiting decimal
    int tmpInt2 = truncf(tmpFrac * tempGain);          // turn fraction into an integer
    // print to string
    if (decimal > 0)
        snprintf(lcd_str_large, STRBUFFSIZE, "%s%s%-*d.%-*d", s, tmpSign, sigDigits, tmpInt1, decimal, tmpInt2);
    else // print significant digits only
        snprintf(lcd_str_large, STRBUFFSIZE, "%s%s%-5d", s, tmpSign, tmpInt1);
}
_printf_f was no longer linked into the final output, and stack usage was reduced significantly, slashing almost 2000 bytes and bringing it down to the AC5 level!
Though I am still not sure what the difference really is, it is definitely something in the library.
Regards, Seng Tak
"I got rid of all my floating point string formatting in sprintf and snprintf"
That's your answer right there!
OK.
What I am not getting is: I did the same (used floating point formatting) with AC5, but the stack usage was 2000 bytes lower.
Both sprintf are ARMABI calls.
All I did was recompile the same sources after switching the compiler, and that is when the increase appeared.
So why is there such a difference between these two libraries? Just being curious...