Hi, I am running a UART solution on an 8051. There is only a UART interrupt handler and no other interrupts. All other code runs in polling mode, with 4 tasks and a minimal OS (RTX).
There are two specific functions that randomly get called sometimes. These functions are system initialisation functions and are not executed in any path during the traffic test, so surely there is corruption happening which causes these functions to get called randomly.
I want to understand which tools/methods to follow to find the root cause of such corruption symptoms. I would like suggestions for the best way to debug this issue on the 8051 platform.
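For reference, a rough sketch of the kind of stack watermark check that could be added while hunting this (assuming the Keil C51 _at_ extension; the 0xFF address and the names are placeholders, and RTX keeps its own task context, so the exact placement would need adapting):

/* Rough sketch: reserve the top IDATA byte as a canary. In C51 the
   stack grows upward through IDATA, so if the canary gets overwritten
   the stack has reached the top and return addresses may already be
   corrupted - a classic cause of "random" calls into init code. */
unsigned char idata stack_guard _at_ 0xFF;

#define STACK_CANARY 0xA5

void stack_guard_init(void)
{
    stack_guard = STACK_CANARY;
}

/* Called from the main polling loop. */
void stack_guard_check(void)
{
    if (stack_guard != STACK_CANARY)
    {
        /* Latch a port pin or log over the UART, then halt so an
           emulator/debugger can inspect SP and the IDATA area. */
        while (1)
            ;
    }
}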
Best Regards. Thanks for your time.
-Rajan Batra
"The cost of a die is related to area and a '51 with 2k Flash takes a lot less space than an ARM with 8K."
This is only true when the two chips are implemented using the same technology.
But just as the 32-bit processors normally have a core with much smaller individual transistors because of a newer process technology, they also have their flash region implemented in a newer technology. So 8kB of flash in a brand new Cortex-M0 consumes much less die area than 2kB of a "normal" 8-bit processor. Same with RAM - the average 32-bit processor can have more flash and more RAM while still consuming less die space.
Starting with a 32-bit addressable memory range means the chip manufacturer can have a range of compatible processors, from variants with quite little memory and few pins up to heavy-duty chips with lots and lots of peripheral pins, peripheral devices and memory. So the total design costs can be shared, while giving the customers a wide range of chips. Which also means that the companies designing in the chips know that they can reuse their software and their hardware when later releasing luxury editions of their products.
So even if the 4-bit or 8-bit processor has fewer transistors, you can normally still come out ahead in cost, size, power, ... with a 32-bit processor choice. Volume is after all the main driving factor when it comes to costs. And with 32-bit processors able to also cover 8-bit tasks, you just don't get the volumes in the 8-bit market except for very specific chips. Not too many companies have the volumes where they can call a chip manufacturer and say that they need 100 million custom-adapted chips.
If implementing a lamp timer, any processor architecture can be used. A bike computer? Same there - you are free to choose 8-bit or 32-bit and the price will still be low enough and the power consumption low enough. But the 32-bit choices will have a higher reusability factor, because the smaller transistors of the newer manufacturing processes mean the 32-bit choice can throw in 10k extra transistors for peripheral functionality to use - or not use - as needed. The extra 0.001 mm² of die space doesn't matter compared to the die space consumed by the I/O transistors + bonding pads. And they do not consume extra power because they are only powered up if the extra serial port or extra timer or extra DAC is actually enabled and used.
It's quite a number of years since "total number of transistors" actually mattered "on the outside". Within a specific family of chips, you pay more for more transistors, because the chip vendor wants more money for the big brother. But if you instead compare between different architectures, then "price per transistor" breaks down. The chip cost isn't based on the number of transistors, but on "how much can I charge and still get the market I want/need?". And when comparing between different families or architectures, the "power per transistor" also breaks down, because manufacturing processes mean so much more than the actual number of transistors implementing the core, the peripheral logic and the memory blocks.
Another factor here, because the core transistors cost so little in die size, fabrication cost and power compared to the cost of the I/O pins, is that it's possible to include additional processor cores for almost the same cost. Which allows a "microcontroller" to still get a slave "I/O controller". The interprocessor communication can be done using tiny 1.2V core transistors instead of bulky I/O-pad transistors. So suddenly you can get hard real-time for specific I/O needs while still having 90% of the actual software written using a software design that need not worry about the real-time requirements. All because of two interrupt controllers, and two PCs + register banks. And this allows the 32-bit processors to claim even more market share and split the development costs over even more sold pieces, making it even more attractive to move the production to newer fab lines with even smaller transistor sizes, capacitances and trace distances.
So in the end, an 8051 with 8kB of flash is more expensive than an 8051 with 2kB of flash. But you might get a 32-bit ARM with 16kB of flash for the same price - while getting full 32-bit timers and baudrate generators that generate correct baudrates for whatever crystal you select, thanks to fractional division.
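To illustrate what that fractional division buys - just a sketch; the mantissa + 1/16-fraction scheme is typical for 32-bit UARTs, but the numbers here are examples, not any specific chip:

#include <stdint.h>
#include <stdio.h>

/* Divider = pclk / (16 * baud); expressed in 1/16 steps it is
   simply pclk / baud, rounded to nearest. */
static void show_fractional_baud(uint32_t pclk_hz, uint32_t baud)
{
    uint32_t div16    = (2u * pclk_hz / baud + 1u) / 2u;
    uint32_t mantissa = div16 >> 4;
    uint32_t fraction = div16 & 0x0Fu;
    double   actual   = pclk_hz / (16.0 * mantissa + fraction);

    printf("%lu Hz @ %lu baud -> %lu + %lu/16, actual %.1f baud\n",
           (unsigned long)pclk_hz, (unsigned long)baud,
           (unsigned long)mantissa, (unsigned long)fraction, actual);
}

int main(void)
{
    show_fractional_baud(72000000u, 115200u);  /* 39 + 1/16: exact        */
    show_fractional_baud(48000000u, 115200u);  /* 26 + 1/16: ~0.08% error */
    return 0;
}

With an integer-only divider (as on a classic 8051 using timer 1), the second case would be stuck at roughly 0.2% error or worse unless the crystal is chosen to match the baudrate.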
And a program custom-designed for an 8051 is more expensive than a program written for a general-purpose processor where it's possible to use a "driver layer" to separate processor-specific code from business logic. So the same code can continue to live 20 years later, having been used in a number of different processors from different vendors, in different families and often using different architectures.
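A minimal sketch of what such a driver layer can look like (the interface and names are illustrative, not any particular vendor's API):

#include <stddef.h>
#include <stdint.h>

/* The business logic only ever sees this interface... */
typedef struct {
    void   (*init)(uint32_t baud);
    size_t (*write)(const uint8_t *buf, size_t len);
    size_t (*read)(uint8_t *buf, size_t len);
} serial_driver_t;

void send_status(const serial_driver_t *port)
{
    static const uint8_t msg[] = "STATUS OK\r\n";
    port->write(msg, sizeof msg - 1);
}

/* ...while uart_8051.c or uart_cortexm.c supplies the actual
   implementation behind an: extern const serial_driver_t uart0; */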
It's easy to think "this processor fits perfectly for this product". But it's almost impossible to predict how the market will move, and what requirements there will be on the product revision 2 or revision 3 or revision 4. So a product might have started with RS-232. Then moved to RS-422. Then Ethernet. Then wireless. Code written for a 32-bit processor is easier to move between processors than code squeezed into a "perfect fit" 8-bit processor.
Per,
"So in the end [smaller geometry causes], an 8051 with 8kB of flash is more expensive than an 8051 with 2kB of flash."
You totally ignored: "Many of the 'combined' chips need a larger geometry for various reasons". For instance, a USB driver must 'flip' 5V and, maybe, a bit of logic in the driver chip would make sense.
"And a program custom-designed for an 8051 is more expensive than a program written for a general-purpose processor where it's possible to use a "driver layer" to separate processor-specific code from business logic."
Maybe not the best example, but if I design a separate sensor chip (requiring a large geometry) with a bit of code, the above is totally invalid.
you keep referring to 'general purpose' where I refer to 'single purpose'
No, I use General Purpose because I mean General Purpose.
The traditional ARM microcontrollers are not "one purpose" chips. They have a general I/O peripheral setup allowing the same chip to be sold for use in a very wide number of applications. People only power up the features that are needed - but the core logic doesn't really need any extra space because it is using the internal low-voltage power domain.
Most ARM chips already have power converters. They may be driven by 3.3V or 5V but internally step down that voltage to a much lower voltage for the actual logic. Yes, the power converter takes some space, but the advantage is that the core logic instead takes almost zero space because of the much smaller geometries. It's just the I/O pad transistors that are large - and they are the same size whatever geometry you use for a 4-bit, 8-bit or 32-bit processor. Their size is directly related to how much ESD protection you want and how much current drive/sink support you want.
What you might miss here is that when using really small geometries, the engine of a CAN controller or a USB controller doesn't really take much more space than a lowly UART. And when special electrical hardware is needed, you normally use an external circuit for handling the CAN wire or Ethernet wire - especially since you want separate protection and EMI filtering when using special I/O. But for most microcontrollers, the actual I/O pins have the same internal electronics whether the pin is just GPIO or whether the pin can be switched from GPIO into UART, USB, Ethernet, CAN, ... It's basically only I2C and ADC pins that may have different circuitry. And there, the needs are the same for a 4-bit processor or a 32-bit processor.
and, if it was general purpose, I'd most likely use an ARM
but for special purpose, where the geometry may be determined by the sensing element, the advantage of using a general purpose processor is NIL
Remember that no 90nm microcontroller has all parts in 90nm. That is only for the digital processing domain inside the protective barriers of the I/O-pad transistors and the analog circuitry, and it runs at a different voltage (almost always internally generated for microcontroller-class chips) compared to the analog circuitry or the I/O pins.
It's a very long time since they learned how to combine analog or high-current circuitry with fine geometry digital core logic.
And the analog sections are normally powered by a separate set of VCC pins, something that isn't as common with traditional 8-bit processors.
There are extremely few situations where the environmental/sensory requirements don't allow - and can't take advantage of - a high-end small-geometry core being thrown in together with large quantities of highly adaptive peripheral logic.
It's mostly when you embed a processor core in another circuit that you can't afford to use a modern, fine-pitch process and instead have to settle for a processor core that is "compatible" with that existing circuitry.
For the 'special' processing purposes (i.e. an interface/sensor/controller circuit with some processing capability making it 'flexible'), who gives a hoot which processor architecture is used, since the code, in effect, is neither software nor firmware but hardware, because, after programming, such a chip is specified in a way indistinguishable from a processorless chip. Also, since such code is so close to the hardware, portability is a moot issue.
But if you note here, we obviously have situations where the processor is "don't care". Those situations aren't what are meaningful to debate.
The debate is when there is a reason to care. And then it's very hard to find situations where the 8-bit controllers actually give any advantage, even if they might have fewer transistors. Because the transistor count isn't the metric that decides the price or the power consumption.
"But if you note here, we obviously have situations where the processor is "don't care". Those situations aren't what are meaningful to debate."
Now we are getting somewhere; previously you have argued for the ARM regardless, whereas I have argued "whatever fits the app".
I do not know where power consumption entered the picture; with the exception of battery operated units, far more power is lost in the supply than in the processor, so a few mW make no difference.
Per, I feel we basically agree, as seen in one of my postings above: "for general purpose I'd probably use an ARM". I have just tried to avoid anyone getting the impression that "only 32 bit makes any sense in any case".
Erik
Note that "whatever fit the app" is normally not a good way to select processor. The processor is normally only "don't care" in situations where it is already fitted to a device - like your example where the sensor already have a processor fitted.
Almost any processor can solve almost any problem - but some processors will solve it better or cheaper or allow more design choices. And this is where an 8051 chip can almost never represent a better choice. The need to keep the transistor count down to an absolute minimum means it has a number of design issues compared to most other 8-bit architectures, while the 32-bit world has dropped enough in price to force lots of 8051 manufacturers to kill off their 8051 offerings and instead go for Cortex chips. And the bit-banding allows lightning-fast one-bit operations.
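As a reference, a small sketch of how bit-banding does that on a Cortex-M3/M4 class part (the alias mapping is the standard one from the architecture; the GPIO register address and pin are hypothetical examples):

#include <stdint.h>

/* Each bit in the peripheral bit-band region gets a word-sized alias:
   alias = 0x42000000 + (byte_offset * 32) + (bit_number * 4)          */
#define BITBAND_PERIPH_BASE   0x40000000u
#define BITBAND_PERIPH_ALIAS  0x42000000u
#define BITBAND_ALIAS(addr, bit) \
    (BITBAND_PERIPH_ALIAS + (((addr) - BITBAND_PERIPH_BASE) * 32u) + ((bit) * 4u))

#define GPIO_ODR_ADDR  0x40010C0Cu   /* hypothetical output data register */
#define LED_PIN        5u

void led_on(void)
{
    /* One word write to the alias sets a single bit atomically,
       with no read-modify-write sequence. */
    *(volatile uint32_t *)BITBAND_ALIAS(GPIO_ODR_ADDR, LED_PIN) = 1u;
}

void led_off(void)
{
    *(volatile uint32_t *)BITBAND_ALIAS(GPIO_ODR_ADDR, LED_PIN) = 0u;
}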
Power comes in because power is almost always an issue to consider. High-end devices often need to consider the temperature - especially if they need to operate over extended temperature ranges. And a huge number of devices are now battery-operated. And lots of devices need a real-time clock that should run for extended times without power connected.
So my notes about power earlier were because price and power consumption were for a long time important reasons why 8-bit processors so often were way better choices. Today both 8-bit and 32-bit processors can keep operating for 10+ years on a small coin cell. And the price of 32-bit processors has dropped enough that it's often the implementation cost that is more important.
Note that "whatever fit the app" is normally not a good way to select processor. an old adage goes "if you heard what I thought I said you would understand me", so let me expand the above to "whatever fit the app best" it is impossible to give general rules for 'best'. Production volume affect the weighing between development cost and unit cost, power can be an issue, tooling cost can be an issue .....