XLARGE memory model & general performance

Hello community,

Using C166 / ST10 with the latest version of the IDE (µVision V4.00a / C166 V7.00 / L166 V5.25).
We need between 100 and 200 KB of data (composed of many structures / arrays), so we set the XLARGE memory model.

General questions about this setup:
Is it the best way?
Approximately how much code execution performance do we lose?
Must we use the x... functions to avoid bugs in the standard library at segment boundaries (e.g. xmemcpy from string.h)?
Is there another solution, since these are non-standard functions?

Thanks in advance.
JM


Reply
  • We need between 100 and 200 KB of data (composed of many structures / arrays), so we set the XLARGE memory model.

    That doesn't seem like sound reasoning. It is almost always preferable to use the smallest memory model that works, then flag individual big variables (or the types used for them) to go into the larger memory space until everything just fits (see the sketch below).

    One argument supporting that strategy is that access to big objects already tends to carry most or all of the overhead of the larger memory spaces anyway, so there is little extra price to pay for actually storing them there.
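    A minimal sketch of that approach, assuming the Keil C166 xhuge memory-type specifier and the xmemcpy routine mentioned in the question; the structure, names, and sizes here are invented for illustration:

    ```c
    /* Build with the SMALL memory model; only the bulk data is flagged
       individually for the 16 MB xhuge address space. */
    #include <string.h>   /* xmemcpy() for xhuge objects (Keil C166 lib) */

    struct sample {
        unsigned int id;          /* int is 16 bits on C166/ST10 */
        unsigned int value[62];   /* 126 bytes per record        */
    };

    /* Two ~100 KB arrays placed in xhuge space; each is larger than
       one 64 KB segment, so it has to be xhuge in any memory model. */
    struct sample xhuge records[800];
    struct sample xhuge snapshot[800];

    unsigned int counter;         /* small data keeps the cheap default
                                     (near) addressing of the SMALL model */

    void save_snapshot(void)
    {
        /* Per the question, xmemcpy() copies correctly across 64 KB
           segment boundaries, which plain memcpy() may not. */
        xmemcpy(snapshot, records, sizeof(records));
    }
    ```

    With this layout the bulk data pays roughly the xhuge access cost it would pay under XLARGE anyway, while every other variable keeps the fast 16-bit addressing of the small model.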
