
8052 data memory: 128 bytes or 256 bytes?

Hello,
I am using an 8052 in my project. According to the datasheet, the data memory is 256 bytes, but if my code uses more than 128 bytes it does not compile.

Do I need to use it as xdata memory?

Please help.

Parents
  • "I don't understand your translation, can you elaborate a bit?"

    Although using 'C' relieves you of having to think about a lot of the detail that assembler programming requires, it does not (and cannot) relieve you of all thinking.

    The compiler can do all the "mechanical" things, but it can't do the things that require real intelligence and original thought - you still have to do that yourself.

    Deciding which variables to put into DATA and which into IDATA is one such thing.


Children
  • Using your real/artificial intelligence and original thoughts answer this.

    Why do we have an option for memory model Small, Medium, or Large?

    Whatever your intelligent answer is!

    Using your "original thoughts", please think about data and idata.

    If we need real/artificial intelligence and original thoughts to use idata, like

    unsigned char idata variable;

    why can't we extend our real/artificial intelligence and original thoughts to xdata or pdata,

    like

    unsigned char pdata variable;
    unsigned char xdata variable;

    I think only people with real/artificial intelligence and original thoughts can answer this stupid question

    Maybe there should be two versions of the compiler:

    one with real/artificial intelligence, with all such nonsense options, for dumb people without original thoughts, like me;

    one with no intelligence, with no such nonsense options, for people with high real/artificial intelligence and original thoughts, Like........

    I apologize for troubling your real/artificial intelligence and original thoughts.

  • one with real/artificial intelligence, with all such nonsense options, for dumb people

    Careful what you wish for. At the point where you just tell the compiler what you want the program to do, software development jobs will have moved to the minimum wage sector.

  • Be careful!

    This can be answered and understood by the people with real/artificial intelligence and original thoughts like.......

  • How about answering the original question:

    How do you propose that the compiler should decide which variables to put into DATA and which into IDATA??

  • Why do have and option for memory model small, medium or Large ?
    because the '51 ain't no PC

    why cannot we extend our real/artificial intelligence and original thoughts for xdata or pdata
    because the '51 ain't no PC

    maybe there should be two versions of the compiler
    impossible because the '51 ain't no PC

    If you want to program "like for a PC" program a friggin PC, do not try to press your PC philosophy on the '51 because the '51 ain't no PC

    Erik

    There is a school of thought that states "C is C; there is no need to know the underlying architecture".

    OK, there are also things like horsefeathers.

  • kapil singhi wondered:
    "Why do have and option for memory model small, medium or Large ?"

    Erik replied:
    "because the '51 ain't no PC"

    Actually, PC Development tools do have a similar "memory model" concept!

    Anyhow, I think the OP's point was not "why do we have memory models at all?" but rather "why do we have only those 3 memory models" and, in particular, why isn't there a memory model that uses IDATA?

    As I've already noted, only one of the four "big" commercial compilers seems to provide an additional IDATA memory model - the other three (which includes Keil) all have just Small (DATA), Medium (PDATA), and Large (XDATA).

    Must be something in that...?

  • As I've already noted, only one of the four "big" commercial compilers seems to provide an additional IDATA memory model - the other three (which includes Keil) all have just Small (DATA), Medium (PDATA), and Large (XDATA).

    An example: I have not seen ANY "power user" in this forum who does not deride malloc() and its associates when used on the '51; I am sure RK is included, although not officially. Nevertheless, the Keil '51 tools do support it.

    This leads me to the following conclusion:
    to claim adherence and to pacify (sell to) the "C is C" users, the '51 tool makers have included features they know are ridiculous when applied to the '51 architecture.

    The "IDATA memory model" is probably the result of a complaint from someone like the OP in this thread: "why do I have to install external memory when I have more than 128 and less than 256 bytes of variables? I cannot, of course, be bothered with qualifying storage."

    Erik

  • "An example: I have not seen ANY "power user" in this forum who does not deride malloc() and its associates when used on the '51"

    I use malloc occasionally, depending on the application. What's your problem with it?

  • Because the size of RAM in an embedded system is almost always fixed & known at compile time (especially on systems suited to an 8051), there is seldom any point in dynamic allocation.
    But it always adds an overhead - and there is the question of what to do if a malloc() call fails...!

  • what to do if a malloc call fails

    This is usually the biggest issue for my projects. I typically have to guarantee that the system supports M widgets and N thingies. And the system can't just not work, or throw up a dialog box and reboot. You pretty much have to guarantee before turning on the power that enough memory is available to do the worst-case job.

    The biggest exception is when the design has some acceptable tradeoffs. Maybe it can accommodate up to M widgets max, and N thingies max, but you can configure it to have m < M widgets and n < N thingies in some combination. In this case, you might consider the partitioning to be "dynamic" in that it's not pre-allocated static memory. But the code still wouldn't likely be creating individual widgets and thingies on the fly. It would just allocate the max size tables according to the configuration.

    It's fairly common for me to dynamically allocate an entry out of one of those tables. But that's not a call to malloc(), but rather some other routine that tracks usage of just that table. You can argue that using malloc() results in less work and less code than writing a bunch of individual allocators. On the other hand, a general-purpose memory allocator is almost always much slower and less efficient than special-purpose ones that can allocate fixed block sizes, and particularly known ones. And rarely are there many different types of objects (records, message buffers, whatever) that need to be allocated.

  • "Because the size of RAM in an embedded system is almost always fixed & known at compile time (especially on systems suited to an 8051), there is seldom any point in dynamic allocation."

    Picture the scenario where you need a chunk of memory that doesn't need static storage duration but requires a lifetime greater than that of automatic data within a function. In this situation dynamic allocation allows more efficient use of memory than can be achieved with data overlaying.

    "But it always adds an overhead"

    Sure - but that's only a concern to some applications.

    " - and the question of what to do if a malloc call fails...!"

    As you pointed out above, the memory available is generally known. Why would you try to allocate more memory than exists?

  • Picture the scenario where you need a chunk of memory that doesn't need static storage duration but requires a lifetime greater than that of automatic data within a function. In this situation dynamic allocation allows more efficient use of memory than can be achieved with data overlaying.
    Not necessarily. If you may need that memory at any time, you WILL need it kept free (unused) during the times you do not need it.

    As you pointed out above the memory available is generally known. Why would you try and allocate more memory than exists?
    Apples and oranges; this should read:
    "As you pointed out above, the total memory available is generally known. Why would you try to allocate more unused memory than exists?"

    The amount of unused memory is not easily "known"

    Erik

  • "Not necessarily. If you need that memory at any time you WILL need it free (unused) when you do not need it."

    That is exactly the point of using dynamic memory allocation. You free it when you no longer need it, then you can re-use it.

    "The amount of unused memory is not easily "known""

    I realise you are unfamiliar with dynamic memory allocation so I'll explain: When memory is allocated you specify the number of bytes required, and there is a fixed overhead associated with each allocation. So, the amount of memory used *is* easily known.

  • I realise you are unfamiliar with dynamic memory allocation so I'll explain: When memory is allocated you specify the number of bytes required, and there is a fixed overhead associated with each allocation. So, the amount of memory used *is* easily known.

    you realize diddlysquat.

    You are babbling about memory being made available; that was never the issue - the issue is the memory available to be made available.

    The amount of memory used is NOT known. If you are in the middle of a function called by a function called by main, you have the globals, the locals in main, the locals in func1, and the locals in func2 subtracted from available memory.

    We are discussing available memory, NOT how much you try to make available with this PC feature used on a '51.

    I am VERY "familiar with dynamic memory allocation" and DO use it where appropriate, but I would never dream of using it on a device with scarce resources.

    Erik

  • I said:

    "I realise you are unfamiliar with dynamic memory allocation so I'll explain: When memory is allocated you specify the number of bytes required, and there is a fixed overhead associated with each allocation. So, the amount of memory used *is* easily known."

    You replied:

    "you realize diddlysquat.

    You are babbling about memory being made available; that was never the issue - the issue is the memory available to be made available."

    Not only are you rude, you are wrong. In the paragraph you quoted I am describing how we know how much memory is used when it is allocated using malloc() and friends. I am *not* talking about "memory being made available".

    "The amount of memory used is NOT known. If you are in the middle of a function called by a function called by main, you have the globals, the locals in main, the locals in func1, and the locals in func2 subtracted from available memory."

    The 8051 has distinct memory spaces. In a typical scenario the variables you describe would be located in the 'data' space, and dynamically allocated memory would be located in the 'xdata' space; hence the memory occupied by automatic and static variables would *not* need to be subtracted from the pool available.

    "We are discussing available memory, NOT how much you try to make available with this PC feature used on a '51."

    malloc() and friends predate the PC by a long way. They are features of the standard C library, not specific to any one platform.

    "I am VERY "familiar with dynamic memory allocation""

    Quite frankly that does not seem to be the case.

    "but I would never dream about using it on a device with scarce resources"

    You are blinkered by your lack of understanding.