Hi, I'm doing a comparison between BL51 and LX51 to determine the code space savings. I'm a bit surprised to see that for the same project, LX51 yields a code reduction of only 0.6%. I've enabled the AJMP/ACALL setting and altered the optimisation levels (speaking of which, under LX51, raising the optimisation level from 9 to the 'new' levels of 10/11 actually increases the code size). Does anyone have similar experience of this phenomenon, or any suggestion as to what else I should try to get the touted 10-15% reductions available using LX51? I'm only using the 'BANKAREA' LX51 controls... Thanks David
"Martin, were you involved in setting the NASA policy 'we fly what we test'?" "Bob, you must be relying on so-called 'testing', that is not safe" Bit of a contradiction there, no? On one hand you're saying that you should use what you've tested, and on the other you're saying that testing is no good. Note that Bob did not say that he tested his code at 8, recompiled it at 9 and shipped it; he said that he did the initial development at 8, then switched to 9 when the code was becoming stable. There is a big difference.
"Bit of a contradiction there, no? On one hand you're saying that you should use what you've tested, and on the other you're saying that testing is no good." Not really. Erik is saying that testing is imperfect. If you test something and then change the optimisation level, you invalidate the previous testing - so you go from something incompletely proven to effectively un-proven. That is definitely a retrograde step!
"Bit of a contradiction there, no? On one hand you're saying that you should use what you've tested, and on the other you're saying that testing is no good."

Nope, I say nothing about testing and a lot about how relying on testing can lead to embarrassment. A properly designed program will, since we are all human, have "clerical errors", and testing will identify those for correction. No testing can, however, verify the design. "Testing" for stack overflow may or may not be valid; how are you going to test for maximum interrupt confluence at the deepest place in main? Such has to be verified by design, since it can not be verified by testing.

Now, elaborating on this: when you let some "optimizer" increase subroutine nesting at will, where is the "design calculation of stack depth against worst case confluence"? You can only perform a worthless test.

Of course there is the theoretical case where you need to optimize to the max and have plenty of bytes for the stack. That, however, is totally atypical: the combination of too much code and very few variables is not a likely one.

Erik
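[Editor's note] The "design calculation of stack depth against worst case confluence" that Erik refers to can be sketched as simple arithmetic: sum the deepest call nesting in main with the frames of every interrupt that could preempt at that point, and compare against the RAM reserved for the stack. All figures below are hypothetical, chosen only to illustrate the shape of the calculation; they are not from any real project in this thread.

```python
# Design-time worst-case stack calculation for an 8051-style part.
# Verified by arithmetic at design time, not by running tests -- which
# is the point Erik makes: a test cannot provoke the worst case on demand.

# Assumed figures for illustration only:
RET_ADDR_BYTES = 2                         # each CALL pushes a 2-byte return address

# Deepest call chain reachable from main(), in pushed bytes.
# If an optimizer silently deepens nesting, this number is wrong.
main_deepest_nesting = 6 * RET_ADDR_BYTES  # e.g. 6 levels of nested calls

# Worst-case confluence: one ISR per priority level fires at that deepest point.
isr_frames = [
    RET_ADDR_BYTES + 4,   # low-priority ISR: return address + 4 pushed registers
    RET_ADDR_BYTES + 6,   # high-priority ISR preempting it: return address + 6 registers
]

worst_case = main_deepest_nesting + sum(isr_frames)
stack_bytes_available = 64                 # assumed internal RAM reserved for the stack

print(f"worst-case stack use: {worst_case} bytes")
print(f"margin: {stack_bytes_available - worst_case} bytes")
assert worst_case <= stack_bytes_available, "stack budget exceeded by design"
```

With these assumed numbers the budget holds with room to spare; the value of the exercise is that any change to call nesting (including one made by an optimizer) forces the sum to be redone.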
"Now, elaborating on this, when you let some 'optimizer' increase subroutine nesting at will, where is the 'design calculation of stack depth against worst case confluence'? You can only perform a worthless test." OK, so what you're really saying is that you shouldn't use high levels of optimisation, because they stop you guaranteeing the 'untestable' aspects of the program through good design practice?
"OK, so what you're really saying is ..." Well, kind of. As my footnote stated, in the unlikely case where you need high optimisation levels and have very few variables - maybe. However, generally speaking - yes. Erik