C51 returns the following error:
*** ERROR C249: 'DATA': SEGMENT TOO LARGE
Rather than switching to the COMPACT or LARGE memory model I would like to manually declare some variables into xdata. I would however like to know what the compiler is actually putting in the data segment and what is taking up the most space. Is there some way I can get this information in the absence of the .MAP file?
Thanks,
Stijn
Figure out how large your DATA segment actually is, and then review the source for the obviously large data structures, and for anything that should be "static const" if you aren't actually changing any of its content.
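For example, a table that is never written can be pushed into code (ROM) space so it stops consuming DATA at all. A minimal sketch, assuming Keil C51's "code" memory-type specifier (the table name and contents are made up):

    /* never written, so it can live in code space instead of DATA */
    static const unsigned char code crc_table[16] = {
        0x00, 0x07, 0x0E, 0x09, 0x1C, 0x1B, 0x12, 0x15,
        0x38, 0x3F, 0x36, 0x31, 0x24, 0x23, 0x2A, 0x2D
    };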
Make the DATA segment temporarily bigger so the linker completes and provides a .MAP, and see if the linker has any command-line options that provide verbose or pass-related output.
"I would like to manually declare some variables into xdata." You should also consider IDATA. Anyhow, generally, structures and arrays go in XDATA, and pointers to them in DATA.
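A minimal sketch of that split, assuming Keil C51's memory-type specifiers (the names are made up):

    unsigned char xdata rx_buffer[256];           /* bulk storage in external XDATA */
    unsigned char xdata *data rx_ptr = rx_buffer; /* only the 2-byte pointer sits in DATA */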
"Make the DATA segment temporarily bigger" - is that even possible (isn't it architecture-defined)? If so, how?
"Is there some way I can get this information in the absence of the .MAP file?"
Somewhat obviously: no. The map file is precisely where you're supposed to get that information.
One aspect in which compiler toolchains differ from each other is what their linkers do to their output files in case of a linking failure. Some will leave them alone (so you would still have the last working version for reference), others will overwrite at least the (primary) map file with a new version documenting the failed attempt, usually including the error message(s) that caused the failure.
Both behaviours make sense, and from a quick experiment, it appears BL51 exhibits the former, but LX51 the latter. Go figure ;-)
"Make the DATA segment temporarily bigger - is that even possible (architecture defined) if so how?"
Depends what "too large" means in this context. If it's smaller than the architectural limit, then make it bigger; if it's already at the limit, then try redirecting 'data' into larger segments that can accommodate it. It was a general suggestion about how to look at the problem if the .MAP was the only way to see how the compiler/linker were behaving.
You're never going to get 2 gallons of crap in a 1 gallon bucket, so if it's architecturally defined to have less capacity than the source needs, someone writing the source needs to actually think about that when coding it.
What's the floor plan for the code/data on the platform you took the code from?
If you did that (rebuilt with the LARGE model), then everything that was defaulting to DATA would default to XDATA - so then you would see it in the map...
About the linker: it doesn't come into play. The compiler (C51) returns the error and only generates an .LST file, so I have nothing to run the linker on.
I disagree with the premise: "Somewhat obviously: no. The map file is precisely where you're supposed to get that information.". The compiler obviously knows the data segment is too large. There is no fundamental reason why it should not be able to tell me what it puts in the data segment. Even just reporting the size of the segment would be very helpful.
Using the large memory model and trying to guess what the compiler would have put in the data segment for the small memory model is somewhat helpful but not very convenient. However, maybe the COMPACT memory model would make the same split...
And concerning the somewhat insulting statement "You're never going to get 2 gallons of crap in a 1 gallon bucket.": I'm in a situation where I was asked to make other people's (non-crappy) code work with a different compiler. Having some more compiler feedback would help.
"The compiler obviously knows the data segment is too large"
I don't have the C51 tools loaded any more so can't try this out, but as a suggestion you might want to try getting the compiler to produce an assembler source file and then look at that.
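In C51's case that would be something like this (a sketch assuming the SRC control, which makes the compiler emit an .SRC assembler file instead of an object file, and assuming compilation gets far enough to emit it; MYFILE is a placeholder):

    C51 MYFILE.C SRC

(or put "#pragma SRC" at the top of the file). The generated .SRC should show which objects the compiler assigns to the DATA-class segments.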
And what was the code footprint on that compiler/platform, which I'm going to guess isn't an 8051?
Does your code have large local/automatic stack allocations?
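That matters because C51 doesn't give automatics a conventional stack frame; in the SMALL model, locals are statically allocated (and overlaid) in DATA, so a single function can exhaust the 128-byte area. A hypothetical illustration:

    void build_report(void)        /* made-up function */
    {
        unsigned char line[96];    /* 96 bytes of DATA claimed by one local */
        /* ... */
    }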
I have yet to see the compiler return "data segment too large", but have quite often seen the linker do it.
Well, given the error number/format it certainly looks to be a compiler-generated error, so one might assume it's trying unsuccessfully to juggle some resources to make the C work. The Keil description of this error lacks any real useful insight into its source.
Break the source into smaller pieces, and observe whether the error points at a particular line, or can otherwise be isolated via the .LST or by bisection.
Given the special needs of the 8051 processor, it's quite easy for the compiler to spot a single object file that alone adds more data than can fit. Why wait for the linker to confirm? It's even likely that Keil uses one-byte offset values in the object file format, so the compiler already knows when it has no free offset values left to assign to the pre-linked data objects.
When porting code from other compilers, it's meaningful to locate arrays and larger structs and tag them for XDATA storage and leave DATA for smaller variables.
"... and trying to guess (sic) what the compiler would have put in the data segment for the small memory model"
There's no guessing about it!
When you change from Small to Large, exactly what went into DATA will then be in XDATA
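So one low-effort way to get a map to read is to rebuild with the LARGE model (a sketch; MYFILE is a placeholder, and LARGE is a C51 command-line memory-model control):

    C51 MYFILE.C LARGE

Link as usual; every object the SMALL model would have put in DATA then appears under XDATA in the resulting map file.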