Hi, I am running a UART solution on an 8051. There is only a UART interrupt handler and no other interrupts. All code runs in polling mode, with four tasks, on a minimal OS (RTX).
There are two specific functions which randomly get called sometimes. These functions are system initialisation functions and are not executed in any path during the traffic test, so surely there is some corruption happening that causes them to be called randomly.
I want to understand what tools/methods to follow to find the root cause of such corruption symptoms. I would like suggestions on the best way to debug this issue on the 8051 platform.
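For example, one approach I have been considering is to put a trap at the top of each of the two suspect functions, so the moment one of them is entered the internal stack gets snapshotted and everything halts for inspection. This is only a rough sketch (Keil C51 syntax assumed; trap_unexpected_call, SNAP_LEN and the buffers are placeholder names, and the exact stack layout at the capture point depends on the compiler prologue):

#include <reg51.h>                 /* Keil C51 header: declares SP, EA, ... */

#define SNAP_LEN 16                /* how much of the IDATA stack to copy   */

static unsigned char xdata trap_sp;              /* SP when the trap fired  */
static unsigned char xdata trap_stack[SNAP_LEN]; /* top of the IDATA stack  */

/* Called first thing inside each function that should never run. The bogus
 * return address (pushed low byte first by LCALL/ACALL) sits just below the
 * captured SP, offset by whatever the prologue pushed, and should point back
 * at whoever made the corrupted call. */
void trap_unexpected_call(void)
{
    unsigned char idata *p;
    unsigned char i, base;

    EA = 0;                        /* freeze: stop the UART interrupt       */
    trap_sp = SP;

    base = (trap_sp >= SNAP_LEN) ? (unsigned char)(trap_sp - (SNAP_LEN - 1)) : 0;
    p = (unsigned char idata *)base;
    for (i = 0; i < SNAP_LEN; i++)
        trap_stack[i] = p[i];      /* snapshot the stack into XDATA         */

    for (;;)                       /* park here so the debugger/emulator    */
        ;                          /* can attach and dump trap_stack        */
}

Is something along those lines a reasonable starting point, or are there better tools/methods for this on the 8051?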
Best Regards. Thanks for your time.
-Rajan Batra
No, I use General Purpose because I mean General Purpose.
And, if it were general purpose, I'd most likely use an ARM,
but for a special purpose, where the geometry may be determined by the sensing element, the advantage of using a general-purpose processor is NIL.
Remember that no 90nm microcontroller has all parts in 90nm. That only applies to the digital processing domain inside the protective barriers of the I/O-pad transistors and the analog circuitry, and it runs at a different voltage (almost always internally generated for microcontroller-class chips) compared to the analog circuitry or the I/O pins.
They learned a very long time ago how to combine analog or high-current circuitry with fine-geometry digital core logic.
And the analog sections are normally powered by a separate set of VCC pins, something that isn't as common with traditional 8-bit processors.
There are extremely few situations where the environmental/sensory requirements rule out a high-end, small-geometry core (and where the design couldn't take advantage of one) thrown in with large quantities of highly adaptive peripheral logic.
It's mostly when you embed a processor core in another circuit that you can't afford to use a modern, fine-pitch process and instead have to settle for a processor core that is "compatible" with that existing circuitry.
For the 'special' processing purposes (i.e. an interface/sensor/controller circuit with some processing capability making it 'flexible'), who gives a hoot which processor architecture is used, since the code, in effect, is neither software nor firmware but hardware: after programming, such a chip is specified in a way indistinguishable from a processorless chip. Also, since such code is so close to the hardware, portability is a moot issue.
But if you note here, we obviously have situations where the processor is "don't care". Those situations aren't the ones that are meaningful to debate.
The debate is when there is a reason to care. And then it's very hard to find situations where the 8-bit controllers actually give any advantage, even if they might have fewer transistors, because the transistor count isn't the metric that decides the price or the power consumption.
But if you note here, we obviously have situations where the processor is "don't care". Those situations aren't the ones that are meaningful to debate. Now we are getting somewhere; previously you have argued for the ARM regardless, whereas I have argued "whatever fits the app".
I do not know where power consumption entered the picture. With the exception of battery-operated units, far more power is lost in the supply than in the processor, so a few mW make no difference.
Per, I feel we basically agree, as seen in one of my postings above: "for general purpose I'd probably use an ARM". I have just tried to avoid anyone getting the impression that "only 32 bit makes any sense in any case".
Erik
Note that "whatever fit the app" is normally not a good way to select processor. The processor is normally only "don't care" in situations where it is already fitted to a device - like your example where the sensor already have a processor fitted.
Almost any processor can solve almost any problem - but some processors will solve it better, or cheaper, or allow more design choices. And this is where an 8051 chip can almost never represent the better choice. The need to keep the transistor count down to an absolute minimum means it has a number of design issues compared to most other 8-bit architectures, while the 32-bit world has dropped enough in price to force lots of 8051 manufacturers to kill off their 8051 offerings and instead go for Cortex chips. And the bit-banding allows lightning-fast one-bit operations.
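To make the bit-banding point concrete, here is a minimal Cortex-M3/M4-style sketch (the flags variable and the macro are just illustration). Each bit in the bit-band region gets its own word-wide alias address, so setting or clearing a single bit is one plain store instead of a read-modify-write sequence:

#include <stdint.h>

#define BITBAND_SRAM_BASE   0x20000000UL   /* first 1 MB of SRAM is bit-banded */
#define BITBAND_SRAM_ALIAS  0x22000000UL   /* alias region for that SRAM       */

/* Word-wide alias of bit 'bit' in the byte at 'addr' (SRAM bit-band region). */
#define BITBAND_SRAM(addr, bit) \
    (*(volatile uint32_t *)(BITBAND_SRAM_ALIAS + \
        (((uint32_t)(addr) - BITBAND_SRAM_BASE) * 32U) + ((bit) * 4U)))

static volatile uint8_t flags;      /* must sit in the bit-band SRAM region    */

void example(void)
{
    BITBAND_SRAM(&flags, 3) = 1;    /* set bit 3 with a single store           */
    BITBAND_SRAM(&flags, 3) = 0;    /* clear bit 3 with a single store         */
}

On an 8051 you get single-instruction bit access only for the small bit-addressable area; bit-banding extends the same idea to a whole megabyte of SRAM and of peripheral registers.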
Power comes in because power is almost always an issue to consider. High-end devices often need to consider temperature - especially if they need to operate over extended temperature ranges - while a huge number of devices are now battery-operated. And lots of devices need a real-time clock that should run for extended times without power connected.
So my earlier notes about power were because price and power consumption were, for a long time, important reasons why 8-bit processors so often were way better choices. Today both 8-bit and 32-bit processors can keep operating for 10+ years on a small coin cell, and the price of 32-bit processors has dropped enough that it is often the implementation cost that matters more.
Note that "whatever fit the app" is normally not a good way to select processor. an old adage goes "if you heard what I thought I said you would understand me", so let me expand the above to "whatever fit the app best" it is impossible to give general rules for 'best'. Production volume affect the weighing between development cost and unit cost, power can be an issue, tooling cost can be an issue .....