Hi all,

In RTX we can assign a function as a task with the _task_ #NO tag, and when calling os_create_task we just pass this task number as a parameter. Assume one has to use code banking: in this case, how does RTX identify which bank a task's function resides in? Where does RTX get the bank information for each task at runtime?

Regards,
pachu
"Even if the more powerful processors sell for HALF the price of the 8051, you can't amortize out the extra development time for learning the architecture unless you sell ALOT of products" This is a good point. However, you also have to consider the savings with the more powerful processor from not having to mess about with code banking. And we know that the code banking is causing extra issues - otherwise the original question would never have arisen!! You're also assuming that the entire cost of learning ARM (or whatever) would have to be amortised over a single project. Considering ARM specifically, I'd consider that an investment rather than a pure cost: I think knowing both ARM and 8051 would be a definite advantage! The 8051 must be the most widely available multi-sourced 8-bit architecture; and I think ARM must have a similar position for 32-bits. Definitely a powerful combination.
"I do not think I have 'missed the point'."

Perhaps not now, but you certainly did when I made this initial suggestion and you kindly quoted me a Digi-Key price for an ARM processor.

"You are right that there will be 'extra development time for learning' but that will be balanced with 'less time for implementing and debugging'"

The notion that implementation and debugging on a completely new architecture will be faster than a more concerted effort on a known processor is dubious at best. I'll run with it for you, though.

"In your above example you did not include time for making things behave in time if a '51 was used, which arguably is one of the most difficult debugging tasks."

A bit more math: assume that debugging will be a horrendous chore on the 8051 and he'll have to spend a whopping 100 hours after the project's been "finished" to debug it and get things working. We'll also assume that, by some miracle due to the ease of use of the new processor, he'll be TWICE as efficient at debugging there, so it will take only another 50 hours.

8051 cost = ((1000 + 100) * $50) + ($7.76 per unit)
ARM cost = ((1100 + 50) * $50) + ($6.00 per unit)

This yields a break-even point of about 1,420 units.

Also, keep in mind that this still runs on the assumption that the ARM processors are actually cheaper than the 8051s at a given quantity. I'm not sure that's true.

The upshot is this: we can keep pulling rather ethereal "efficiency" notions up when it comes to new processors, but eventually someone has to make some hard estimates about times, costs, etc. to make an informed decision. Basing a choice on a perceived improvement in style is the reason engineers are rarely allowed to make business decisions.
"You've missed my point. Even if the more powerful processors sell for HALF the price of the 8051, you can't amortize out the extra development time for learning the architecture unless you sell ALOT of products. ... Now assume ... that this is at least a moderately complex project."

I do not think I have "missed the point". You are right that there will be "extra development time for learning", but that will be balanced by "less time for implementing and debugging". In your above example you did not include the time for making things behave in time if a '51 was used, which is arguably one of the most difficult debugging tasks. With the combined complexity of an RTOS, banking and floating point on a '51, I consider it highly unlikely that timing problems will not pop up.

Erik
"You cannot access more than 64k of xdata without *some* overhead. The 8051 is not designed to address more than 64k. According to your philosophy you have used the wrong microprocessor for the job."

No, I have not - even by my own philosophy. I only need to access about 25 bytes of the >64k data, about once an hour. The reason I do not use banking, with its overhead, is that the rest of the time I run "true '51", aka hauling @$$. Had I needed that data more often, I would not have used a '51 derivative.

Erik
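Erik doesn't say what his "other means" are, but one common bank-free technique on '51 derivatives is to drive the upper address lines from a latch yourself, paying the page-select cost only on the rare accesses that cross a 64k boundary. Here is a minimal sketch of that idea; the hardware latch and the external memory are simulated in plain C so it can run anywhere, and the four-page layout and page size are illustrative assumptions, not Erik's actual hardware:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated external memory: 4 pages of 64 KB each (256 KB total).
 * On real hardware, `page_latch` would be a port-driven latch feeding
 * address lines A16..A17, and xread/xwrite would be MOVX accesses
 * through a 16-bit pointer. Here both are simulated.                  */
#define PAGE_SIZE 0x10000UL
static uint8_t xmem[4][PAGE_SIZE];
static uint8_t page_latch;            /* stands in for the hardware latch */

/* Select which 64 KB window the 16-bit address space maps onto. */
static void select_page(uint8_t page) { page_latch = page; }

/* Ordinary accesses within the current page cost nothing extra. */
static uint8_t xread(uint16_t addr)            { return xmem[page_latch][addr]; }
static void    xwrite(uint16_t addr, uint8_t v) { xmem[page_latch][addr] = v; }
```

The point of the design is that the overhead (one latch write) is only paid when you change pages, not on every access, which matches Erik's pattern of touching the far data about once an hour.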
"Digi-key sell Philips ARM processors at qty 1 priced $6 and up"

You've missed my point. Even if the more powerful processors sell for HALF the price of the 8051, you can't amortize out the extra development time for learning the architecture unless you sell a LOT of products. For instance, take the following math:

Digi-Key 87C51FB33 @ qty. 1 = $7.76
Digi-Key ARM @ qty. 1 = $6.00 (I don't know ARM stuff, so I'll use your number)

Now assume (since the OP wants to use an RTOS) that this is at least a moderately complex project. Assume it will take 1000 hours of programming time on the 8051 architecture, with which the OP is familiar. Also assume that the RTOS price for both ARM and 8051 is the same (which might not be true). Also, we'll assume the OP is a dynamite embedded-systems guy and will suffer only a 10% loss in efficiency switching to a new processor, which means it will take him 1100 hours to complete the project with ARM. Let's pick a reasonable billing rate for an embedded-systems engineer ($50/hr is probably low, but I'll be generous for my purposes here).

Then, if we assume the entire system is the same except for the processor selected, and compare only incremental costs, we have:

8051 cost = (1000 * $50) + ($7.76 per unit)
ARM cost = (1100 * $50) + ($6.00 per unit)

So the break-even analysis is:

(1000 * 50) + (7.76 * units) = (1100 * 50) + (6 * units)

which yields units = 2840.

So if all of these assumptions are correct (and I think they're reasonable), and even though the ARM costs $1.76 less than the 8051, the 8051 makes the most money for the developer's company unless he plans to sell more than 2,840 units over the product life-cycle.
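Making the break-even arithmetic above explicit (all hours, the $50/hr rate, and the unit prices are the post's stated assumptions, not measured data):

```c
#include <assert.h>

/* Break-even unit count: the point where the cheaper part's per-unit
 * saving has paid back the extra engineering hours.
 *   (hours_a * rate) + price_a * u = (hours_b * rate) + price_b * u
 *   =>  u = (hours_b - hours_a) * rate / (price_a - price_b)          */
static double break_even_units(double hours_a, double hours_b,
                               double rate,
                               double price_a, double price_b)
{
    return (hours_b - hours_a) * rate / (price_a - price_b);
}
```

With the numbers above, break_even_units(1000, 1100, 50, 7.76, 6.00) gives roughly 2,840 units; plugging in the debugging-adjusted hours from the reply (1100 vs 1150) gives roughly 1,420.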
"Nobody said anything about data exceeding 64k." Er, yes *I* did. I was drawing a parallel between using more code space than the 8051 was designed to address and more xdata space that the 8051 was designed to address. "I have such, but access it by other means than Keils "banking" with associated overhead" You cannot access more than 64k of xdata without *some* overhead. The 8051 is not designed to address more than 64k. According to your philosophy you have used the wrong microprocessor for the job.
"I think the point is that more powerful processors may still be an order of magnitude more expensive if you're not talking about some ultra-high-volume consumer electronics product"

Digi-Key sell Philips ARM processors at qty 1 priced $6 and up.

Erik
OOPS

""But there is no point in using code banking unless you've exceeded the 8051's address space - so that is inherently an 'overload' of sorts!" Do you consider using more than 64k of NV xdata storage an 'overload'?"

Nobody said anything about data exceeding 64k. I have such, but access it by other means than Keil's "banking" with its associated overhead.

Erik
"as said before 'the '51 PC' may have been realistic when more powerful processors were an order of magnitude more expensive."

I think the point is that more powerful processors may still be an order of magnitude more expensive if you're not talking about some ultra-high-volume consumer electronics product. If the OP is making something for industry, where he's going to sell only tens of units per year, and he is only really confident with the 8051, then he will simply not be able to amortize away the extra development time and learning-curve climbing he'd have to do to master a new processor. Further, if the 8051 will do everything acceptably, then there's no reason for him to even attempt it.

I try to squeeze in some learning about new processors whenever I can, but sometimes you've got a timeline that suggests you work with what you know.
Yes, RTX51 and RTX51 Tiny save the current code bank in the Task Control Block (TCB). Reinhard
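To picture what Reinhard describes: conceptually, the kernel keeps a per-task record that carries the code bank along with the rest of the task's context, and restores it on every context switch before resuming the task. The sketch below is purely illustrative - the field names, layout, and the bank-select register are invented for the example and are not RTX's actual internals:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical task control block. RTX's real TCB layout is not shown
 * here; the point is only that the bank number travels with the task.  */
typedef struct {
    uint16_t pc;          /* saved program counter (within the bank)    */
    uint8_t  sp;          /* saved stack pointer                        */
    uint8_t  code_bank;   /* bank in which the task's code lives        */
} tcb_t;

static uint8_t bank_select;   /* stands in for the bank-select SFR/port */

/* On a context switch, the scheduler restores the task's bank before
 * jumping to its saved PC, so the correct 64 KB code window is mapped. */
static void restore_task_bank(const tcb_t *t)
{
    bank_select = t->code_bank;
}
```

This also answers the original question: RTX doesn't need to "discover" the bank at runtime - it simply records, per task, whichever bank was current when the task was set up, and switches back to it whenever that task is scheduled.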
"But there is no point in using code banking unless you've exceeded the 8051's address space - so that is inherently an "overload" of sorts!" Do you consider using more that 64k of NV xdata storage an 'overload'? "if you've decided that a 'small' controller like an 8051 is suitable for your project, I can't really see why you should then need to use an RTOS." You may not *need* to use an RTOS. Using one, however, may be a convenient and effective way of coding a given application.
"A microprocessor is a numbercruncher that should react in less than a second, a microcontroller is a device that shall react NOW." 97.3% of statistics are made up on the spot. 98.2% of definitions of the words 'microcontroller' and 'microprocessor' also appear to be made up on the spot. One thing that neither a microprocessor or microcontroller can do is respond "NOW". I assume you're familiar with interrupt latency? "If you want to mislabel the '51 as a microprocessor (which it aint) you can state all kinds of things from that premise." I'm not really interested in labelling it as either as the distinction between the two has become hopelessly blurred.
"All of these sound like pretty good reasons to me." Well, potentially good reasons...! ;-) <cynic>But you would say that - you've got an RTOS to sell...!</cynic> ;-) I didn't say it's impossible - just seems pretty improbable to me.
so, as I said earlier, "if you've decided that a 'small' controller like an 8051 is suitable for your project, I can't really see why you should then need to use an RTOS."

Here are a few reasons...

1. The RTOS solves a problem that you can't, or don't want to, code (possibly incorrectly) your way around.
2. You will save development time that you can put to better use elsewhere.
3. The MCU has enough horsepower to run the RTOS and your application.
4. The 8051 with an RTOS is still less complex, overall, than moving to some other architecture.
5. You have experience with both the 8051 and the RTOS.

All of these sound like pretty good reasons to me.

Jon
amen
View all questions in Keil forum