
RTX Code Banking

Hi all,
In RTX we can declare a function as a task with the _task_ <task number> tag.

When calling os_create_task we just pass this task number as a parameter.

Assuming one has to use code banking, how does RTX identify which bank the task's function is located in?

Where does RTX get information about the bank of each task at run time?
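
For reference, here is a minimal sketch of what I mean (the header name, task numbers and function names are just placeholders, not my real code):

#include <rtx51.h>     /* RTX-51 Full; RTX-51 Tiny would use rtx51tny.h */

/* first task: creates the worker task by passing only its task number */
void init_task (void) _task_ 0
{
    os_create_task (1);    /* RTX gets nothing but the number 1 here */
    os_delete_task (0);
}

/* if this function is placed in some code bank, how does RTX find it? */
void worker_task (void) _task_ 1
{
    for (;;)
    {
        /* do the actual work */
    }
}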

regards
pachu

  • I do not think I have "missed the point".

    Perhaps not now, but you certainly did when I made this initial suggestion and you kindly quoted me a Digi-Key price for an ARM processor.

    You are right that there will be "extra development time for learning", but that will be balanced by "less time for implementing and debugging".

    The notion that implementation and debugging on a completely new architecture will be faster than a more concerted effort on a known processor is dubious at best. I'll run with it for you, though.

    In your example above you did not include the time for making things behave correctly in real time if a '51 were used, which is arguably one of the most difficult debugging tasks.

    A bit more math:

    Assume that debugging will be a horrendous chore on the 8051 and he'll have to spend a whopping 100 hours after the project has been "finished" to debug it and get things working. We'll also assume that, by some miracle due to the ease of use of the new processor, he'll be TWICE as efficient at debugging and it will take only another 50 hours.

    8051 Cost = ((1000 + 100) hours * $50/hr) + ($7.76 * units sold)
    ARM Cost  = ((1100 + 50) hours * $50/hr) + ($6.00 * units sold)

    This yields a break-even point of about 1,420 units sold. Also, keep in mind that this still runs on the assumption that the ARM processors are actually cheaper than the 8051s at a given quantity; I'm not sure that's true.
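
    Just to show the arithmetic (a throwaway sketch; every number in it is one of the assumptions above, not real data):

    #include <stdio.h>

    int main (void)
    {
        /* Assumed figures from the estimate above - not real data. */
        const double rate      = 50.0;            /* engineering rate, $/hour      */
        const double hrs_8051  = 1000.0 + 100.0;  /* development + extra debug hrs */
        const double hrs_arm   = 1100.0 + 50.0;   /* development + extra debug hrs */
        const double unit_8051 = 7.76;            /* assumed part cost per unit, $ */
        const double unit_arm  = 6.00;            /* assumed part cost per unit, $ */

        /* Break-even: hrs_8051*rate + unit_8051*n == hrs_arm*rate + unit_arm*n */
        double n = (hrs_arm - hrs_8051) * rate / (unit_8051 - unit_arm);

        printf ("Break-even at roughly %.0f units\n", n);   /* about 1420 */
        return 0;
    }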

    The upshot is this: we can keep pulling up rather ethereal "efficiency" notions when it comes to new processors, but eventually someone has to make some hard estimates about times, costs, etc. to make an informed decision. Just basing a choice on a perceived improvement in style is the reason engineers are rarely allowed to make business decisions.
