I'm wrapping up some product code on a modern 8051.
I actually surprised myself, because I ran out of memory in the data area.
I need to check with the manufacturer, because supposedly this chip has 256 bytes of RAM in each of IRAM and XDATA. I have to dig through the manufacturer's device setup files, because it isn't like an ARM, where you specify the address ranges in Keil directly.
I understand legacy 8051s in some cases had an external memory device that you could access using the XDATA syntax. I understand the architecture issue to a degree, since I've been building a mock 8-bit MCU in Verilog.
On a contemporary 8051, is it really that big of a performance hit to use the XDATA area for variables?
Am I correct that this is really a micro-optimization in terms of system gain? Like fractions of a microsecond (µs) of difference, or is it worse?
I'd even guess that if I just put every variable I have into XDATA, my interrupt timing might literally crash and burn at my required 760 µs sampling rate. You'd certainly have to revisit and confirm the timing. Basically, we all go by the rule: most-used variables in DATA, least-used in XDATA, and everything in between in IDATA.
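A minimal sketch of that rule in Keil C51 style. The `data`/`idata`/`xdata` specifiers are Keil extensions, not standard C, so the macros below stub them out on other compilers purely so the snippet stays buildable for illustration; all the variable names are made up, not from any real project:

```c
#include <stdint.h>

/* Keil C51 memory-type specifiers; stubbed out elsewhere so this
   sketch also compiles with a standard C compiler. */
#ifdef __C51__
  #define DATA_  data    /* direct-addressed internal RAM, fastest access */
  #define IDATA_ idata   /* indirect-addressed internal RAM (256 bytes)   */
  #define XDATA_ xdata   /* XRAM reached via MOVX, slowest access         */
#else
  #define DATA_
  #define IDATA_
  #define XDATA_
#endif

/* Hot variables touched in the sampling ISR: keep them in DATA. */
volatile uint8_t  DATA_  adc_flags;
volatile uint16_t DATA_  adc_sample;

/* Warm state read a few times per pass: IDATA. */
uint8_t IDATA_ filter_index;

/* Cold stuff (big buffers, config, logs): push it out to XDATA. */
uint16_t XDATA_ sample_log[64];
```

On a classic core the difference is real: a DATA access is a one-byte direct operand, while every XDATA access goes through DPTR and a `MOVX`, so hot ISR variables belong in DATA first.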
8 MHz is blazing fast! I wish we could run the code at 250 kHz... I feel like with every doubling of the clock speed I see about 0.3 mA of current increase. Every 0.3 mA matters when you have a 300 mAh battery. I am sitting here babysitting every peripheral to minimize current drain. I know of no current '51 derivative that can't handle 24 MHz; SiLabs has one-clockers that run at 100 MHz.
Also, many (most) modern derivatives use fewer than the "steam-driven" 12 clocks per machine cycle.
If you are looking for blazing speed, look at the SiLabs F5xx parts, which run about 100 times faster (1 clock per cycle at 100 MHz) than the original '51.
Yeah, my MCU could run up to 32 MHz using the on-board oscillator. I'm the nut job who is profiling the system down to the last dotted i to minimize current consumption.
I totally hear you on the DATA/IDATA/XDATA front. In a pinch, I could refactor the software and prioritize variables.
(I took pity on myself and did speed up the ADC sampling, after like 6 hours of battling the ADC readings. A frigging tri-stated output on a LiPo charging IC vs. a flaky on-board ADC. Put two crappy ICs together and voilà, a giant *** sandwich [pardon the language].)
It's unclear from the Taiwanese tech support whether this 8051 MCU has the updated instruction set / 1-clock operation. They claim so, but it doesn't seem like it when I am setting up timers.
This Asian MCU is a real dog (I don't want to call out the brand, since I've actually been to Taiwan to meet guys who work for the MCU manufacturer; nice people, kind of a so-so MCU, but you get what you pay for)... SiLabs would be a dream.
---
I've gotten surface-level deep with the Cortex-M0 / M3. I'd like to power-profile a Cortex-M0 vs. the updated SiLabs 8051s and see how this 76 µs task plays out from a current consumption perspective. The learning curve is a bit nasty on those Cortex MCUs.
Appreciate those manuals!
> Yeah my MCU could run up to 32MHz using the on-board oscillator. I'm the nut job who is profiling the system down to the last dotted i, to minimize current consumption.

Think! You say you idle the processor: if you run at 24 MHz instead of 8 MHz, you use 3× the current for 1/3 the time, which works out the same.
> It's unclear from the Taiwanese tech support if this 8051 MCU is the updated instruction set / 1 clock operation. They claim so, but it doesn't seem like it when I am setting up timers.

Timers are not related to instruction cycle time.
SiLabs bought Energy Micro, and thus they have the most power-stingy Cortexes.
The learning curve is not that bad if you use the manufacturer's "bring-up packages".
Yeah, I agree that at 32 MHz the system should be at optimal current consumption with idling... the electrical characteristics of the MCU state the same.
I actually started with the core running at 32MHz.
For some reason, with this particular task and whatever overhead comes with the MCU coming out of idle, running the MCU at 8 MHz + idling resulted in an improvement in current consumption of around 0.4 mA vs. running the same task at 32 MHz + idling.
(So I got to re-wire all the interrupt timing, always fun).
I can't even start to speculate why this is the case, but the measured results are definitely valid.
I actually posted this line of thought on StackExchange for Cortex-M. It seems like a mixed bag for other users in terms of their testing. In most cases, it seems "burst processing" is optimal.
I remember reading an app note from Microchip on power tips, and they called out burst processing vs. reducing the clock speed. The conclusion was "it depends".