Hi, I'm doing a comparison between BL51 and LX51 to determine the code space savings. I'm a bit surprised to see that, for the same project, LX51 yields a code reduction of only 0.6%. I've enabled the AJMP/ACALL setting and altered the optimisation levels (speaking of which, under LX51, raising the optimisation level from 9 to the 'new' levels of 10/11 actually increases the code size). Does anyone have any similar experience of this phenomenon, or any suggestion as to what else I should try to get the touted 10-15% reductions available using LX51? I'm only using the 'BANKAREA' LX51 controls... Thanks David
I came to this discussion kind of late. I was looking to understand why levels 10 and 11 seem to make my generated .BINs larger. My projects are pretty big, getting just up to the 64K limit of the classic 8051 architecture. I have very carefully followed all optimization instructions. However, you said that I must have done something wrong if I do not see a 5% reduction. Please help me find what I did wrong so I can get this benefit. I really need it in some of the bigger variants of my project.
"I was looking to understand why level 10 and 11 seem to make my gnerated .BINs larger" If you are able to select levels 10 & 11, I think you must have a product that is entitled to support (unless your support period has expired). In that case, you might be better off contacting Keil support direct. I seem to remember from a year or so back that there were some options that could "stick" when increasing the optimisation level; ie, when you thought you'd raised the whole project to level 10 or 11, there could be a few files left still with contradictory options. You can check the listing files for this.
"I was looking to understand why level 10 and 11 seem to make my gnerated .BINs larger." The general conclusion to the previous discussions of levels 10 and 11 was that they do in most (all?) cases give larger code than level 9. Keil advertised these extra optimisation levels for quite a long time before they appeared, so I guess they had to release them even though they didn't work. "However, you said that I must have done somethign wrong if I do not see a 5%" I think the 5% improvement comes from the switch from BL51 to LX51 with code packing enabled, but see the note from Jon Ward earlier in this thread. You don't need to select levels 10 or 11 for this to work.
The big code reduction in LX51 is gained by the Linker Code Packing. This is where you can reduce the code by 5-10% on average. The optimization levels 10 & 11 are "maybe-optimizations". They can reduce code in most cases, but in others the code may even grow. Personally I think it's better to have another chance at further code size reduction, even if I have to test the result on an experimental basis, than not to have it.
"They can reduce code in most cases" Most cases?
I have found that levels 10 and 11 NEVER reduce code size. Aside from that they make the listing nearly impossible to follow since the only valid listing is from the linker. I generally develop with OT(8) then switch to OT(9) when the product is getting stable. I can still follow the generated code in the compiler list files and know that the only changes the linker is making is optimizing the branch instructions.
"Aside from that they make the listing nearly impossible to follow since the only valid listing is from the linker." For that reason, those of us who believe in the NASA principle rarely optimize beyond level 2, because debugging gets difficult to impossible when the ICE cannot correlate. Erik The NASA principle: we fly what we test.
"I have found that levels 10 and 11 NEVER reduce code size" Me too, on a fair selction of different projects.
I generally develop with OT(8) then switch to OT(9) when the product is getting stable.

Hello Bob, if you switch your stable product compiled with OT(8) to a new version compiled with OT(9) applied to all modules, in some cases your product will become unstable! Why?

If the compiler applies the OT(9) "common block subroutines" optimisation, it creates new internal subroutines which are executed via CALLs (not in every case; sometimes C51 optimises again and replaces CALLs with JMPs). Remember the side effect: every additional CALL increases the stack pointer SP by two bytes!

Normally, you also have interrupts in your code. Assume there are two interrupt functions with different priority levels: if C51 creates common block subroutines in these functions, your demand on stack space increases by 2 bytes PER interrupt function (in addition to the two bytes mentioned above!). The same applies if a common block subroutine in main (or the startup code) calls further subroutines which themselves contain common block subroutines: you must budget 2 additional bytes of stack per calling level!!!

So, if your stable product compiled with OT(8) has a free reserve of 4 bytes on the stack, after compiling with OT(9) your code size will probably be smaller, BUT your stack memory will probably be too small, depending on your software structure!

I hope I have explained clearly the "traps and pitfalls" of switching a stable product from OT(8) to OT(9) without testing extensively again. You have to analyse the *.LST files generated by C51 to estimate the increase in your actual stack demand. Best regards Martin Macher (sorry if my English is not perfect ;-)
Hello Bob, if you switch your stable product compiled with OT(8) to a new version compiled with OT(9) applying on all modules: In some cases your product will get unstable! .... Martin

Martin, were you involved in setting the NASA policy "we fly what we test"?

Bob, you must be relying on so-called "testing"; that is not safe. Successful testing does not prove the absence of bugs, it only proves the absence of known bugs. The case Martin discusses (stack shortage) is the very one so-called "testing" never catches, because the stack overflow typically happens with an extremely rare confluence of events. 98.17% of the cases of the "once a month hiccup" in a microcontroller system are due to a missed "extreme confluence of events". As an example: in one case a hiccup happened very rarely; for it to happen, a low-priority interrupt had to occur within 12 rarely accessed instructions in the main code and then, during execution, be interrupted by a high-priority interrupt. Erik
"Martin, were you involved in setting the NASA policy "we fly what we test"?" "Bob, you must be relying on so called "testing", that is not safe" Bit of a contradiction there, no? On one hand you're saying that you should use what you've tested, and on the other you're saying that testing is no good. Note that Bob did not say that he tested his code at 8, recompiled it at 9 and shipped it, he said that he did the initial development at 8 then switched to 9 when the code was becoming stable. There is a big difference.
I misstated my response ... we use OT(8) without linker code packing. If we need further savings, we then switch to using code packing. While I agree that this could cause errors, we always validate. Our problem is that our customers tend to add features until all available resources are consumed. OT(9) is only used as a last resort since it obscures the code and listings making it much more difficult to follow.
"Bit of a contradiction there, no? On one hand you're saying that you should use what you've tested, and on the other you're saying that testing is no good." Not really. Erik is saying that testing is imperfect. If you test something and then change the optimisation level, you invalidate the previous testing - so you go from something incompletely proven to effectively un-proven. That is definitely a retrograde step!
Bit of a contradiction there, no? On one hand you're saying that you should use what you've tested, and on the other you're saying that testing is no good.

Nope, I say nothing against testing as such, but a lot about how relying on testing can lead to embarrassment. A properly designed program will, since we are all human, have "clerical errors", and testing will identify those for correction. No testing can, however, verify the design. "Testing" for stack overflow may or may not be valid; how are you going to test for the maximum interrupt confluence at the deepest place in the main code? Such things have to be verified by design, since they cannot be verified by testing. Now, elaborating on this: when you let some "optimizer" increase subroutine nesting at will, where is the "design calculation of stack depth against worst case confluence"? You can only perform a worthless test. Of course there is the theoretical case where you need to optimize to the maximum and still have plenty of bytes for the stack. That, however, is totally atypical: the combination of too much code and very few variables is not a likely one. Erik
"Now, elaborating on this, when you let some "optimizer" increase subroutine nesting at will, where is the "design calculation of stack depth against worst case confluence", you can only perform a worthless test." Ok, so what you're really saying is that you shouldn't use high levels of optimisation because you can't guarantee the 'untestable' aspects of the program will be ok by using good design practices?
Ok, so what you're really saying is ... Well, kind of. As my footnote stated, in the unlikely case where you need high-level optimisation and have very few variables - maybe. However, generally speaking - yes. Erik