Hi
I am receiving:
*** ERROR L121: IMPROPER FIXUP
MODULE: C:\KEIL\C51\LIB\C51C.LIB (PRINTF)
SEGMENT: ?PR?PRINTF?PRINTF
OFFSET: 0068H
I have tried to read threads about this, but cannot understand it since it appears in a .LIB file.
Any help? I really need it. Thanks for your time. Dario
How come deleting code (and with it, variables) now makes the space insufficient? Do you have any linker warnings like "... L16 uncalled segment ..."?
"maybe Large model is ok"
No, it is not OK, go for 'small' if you decide to change.
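for reference, a minimal sketch of what the model actually changes (data/xdata are the standard Keil C51 qualifiers; the variable names here are invented):

unsigned char counter;              /* SMALL: defaults to DATA; LARGE: defaults to XDATA */
data  unsigned char fast_flag;      /* explicit: always in direct-addressed internal RAM */
xdata unsigned char log_buf[256];   /* explicit: always in external/auxiliary RAM */

an explicit qualifier overrides the model either way; the model only decides where the unqualified variables go.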
Erik
Plenty, around 75 warnings.
Best practice is to solve the errors/warnings in the order they appear.
Is the fixup error the first error that you are getting from the linker?
You really should start fixing all the warnings.
"Best practice is to solve the errors/warnings in the order they appear."
Absolutely - one warning very often leads to another!
Thus it's often best to address them one at a time, and in order.
"maybe Large model is ok"
"No, it is not OK, go for 'small' if you decide to change."
Please explain, if you can, why you think it is not ok for the OP to change from the compact model to the large model.
the large model is slower than a snail and is a glutton for codespace.
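roughly why (a sketch only - the exact code varies by compiler version): a DATA byte is read with one short, direct MOV, while an XDATA byte needs DPTR loaded and a MOVX on every access:

data  unsigned char a;   /* read: MOV  A,a        - short and fast  */
xdata unsigned char b;   /* read: MOV  DPTR,#b    followed by       */
                         /*       MOVX A,@DPTR    - longer, slower  */

and under LARGE, every unqualified variable pays that xdata price.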
PS I respond to this from you simply because you screwed up and made a reasonable request. Do not expect me to pick up responding to your usual crap.
"the large model is slower than a snail ..."
But the OP specifically stated that he has "no time demanding issues".
If a slow snail is fast enough, then there's no reason not to use it!
"...and is a glutton for codespace"
Again, if codespace is not at a premium, that's not a problem.
With a large application, where most data is going to have to be in XDATA anyhow, I can't see why using the Large model would be a big issue?
As you've just said yourself elsewhere, you're happy to take the codespace hit and turn off the optimiser to give debuggable code - so I don't see why it's evil to take these hits and use the Large model where it makes sense...
the problem I have with using the large model is that it is far easier to correctly assign xdata to variables where "slow is allowed" than to use the large model and 'catch' all cases where DATA should be used.
I, personally, have a problem with the attitude "this is not critical, so let us not worry about it" since that usually comes back and bites you in a large muscle.
"you're happy to take the codespace hit and turn off the optimiser to give debuggable code"
YES, there is the advantage of "debuggable code"; the only 'advantage' of the LARGE model is that it allows you to be lazy.
Please do not turn 'I am not using the optimizer' into 'I am not interested in optimal code'. I do everything reasonable to get the fastest, smallest result without using the optimizer.
"the problem I have with using the large model is that it is far easier to correctly assign xdata to variables where "slow is allowed" that to use the large model and 'catch' all cases where DATA should be used."
I see; but I was specifically talking about large applications where most data is going to have to be in XDATA anyhow - in which case littering 98% of all definitions with an 'xdata' qualifier is pointless and just adds unhelpful "noise" to the source.
In such cases, "slow is allowed" is the rule and "fast required" is the exception - so only those exceptions should be specially qualified as DATA (or IDATA or whatever).
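For example - a minimal sketch assuming the Large model (the variable names are invented):

unsigned int  sample_log[500];            /* unqualified: XDATA by default under Large */
unsigned char rx_buffer[1024];            /* likewise XDATA - no 'xdata' noise needed  */
data volatile unsigned char isr_flags;    /* the exception: qualified for fast, direct access */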
"Please do not make that I am not using the optimizer, into that I am not interested in optimal code."
Never intended to do that: just pointing out that the smallest possible code size is not the "optimum" in all cases - other requirements may take precedence...
"I was specifically talking about large applications where most data is going to have to be in XDATA anyhow - in which case littering 98% of all definitions with an 'xdata' qualifier is pointless and just adds unhelpful "noise" to the source."
Ok, first: "there is no rule without exceptions". Second: I cannot think of any application that is reasonably well designed where (I)DATA cannot handle all or most 'single' variables, so the number of xdata declarations will be relatively few. It takes ONE 'xdata' to declare a buffer of 5000 bytes.
If, however, you go with the LARGE model, almost every 'single' variable should have (i)data attached to it, i.e. more 'noise'.
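to put the same point in code (a minimal sketch, two alternative builds, invented names):

/* under the SMALL model: 'single' variables default to DATA, one qualifier covers the buffer */
unsigned char state;               /* DATA by default */
xdata unsigned char frame[5000];   /* the ONE 'xdata' */

/* under the LARGE model: the buffer needs no qualifier, but every fast 'single' must be marked */
unsigned char big_frame[5000];     /* XDATA by default */
data unsigned char hot_state;      /* must be qualified to stay fast */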
Again, if someone does not give a hoot, then all discussion of this is moot. I have seen "I use large so I do not have to worry about space", which I read as "I use large because I am a lazy dog".
I have seen "I use large so I do not have to worry about space" which I read as "I use large because I am a lazy dog"
Please explain how taking advantage of the available resources makes one a 'lazy dog'.
you would ask that, would you not? feel hit?
"Please explain how taking advantage of the available resources makes one a 'lazy dog'"
Lazy is: "I can't be bothered to think about which Memory Model to use, so I'll just go for 'Large' and not bother about it"
As opposed to considering the pros & cons of each model, and choosing the most appropriate one.
1 - Take a project.
2 - Analyse all of the available information relating to the CPU, memory, speed, compiler, resources, etc.
3 - Determine the model most appropriate for the situation.
4 - Use it.
Now, after going through this procedure, the team might decide that the large model is the most suitable.
Assuming that there is sufficient memory and processing power, using the large model then allows data to be allocated more freely.
As far as I am concerned, this situation does not exhibit laziness but rather a careful, balanced design decision.
As usual you are unable to justify your mindless opinions.
Please, when you feel the need to post rubbish could you qualify it with "in my opinion" or something similar to ensure that you don't mislead the inexperienced?