Hello,
I'm creating an application which runs on a Dallas 89C450 microcontroller. My application uses the RTX-51 Full RTOS. The versions of the tools integrated into my IDE are as follows:
C51 Ver: 7.50 A51 Ver: 7.10
My application used to work fine, but I was having problems with my available code space. So I decided to port my application to the LX51 Advanced Linker V4.24 in order to use its code packing feature. But my application started malfunctioning after enabling code packing.
What might be going wrong? How can I solve this problem? Using the code packing feature is quite important to me, because it seems it would solve my problems with available code space if it worked properly.
Please send me your opinions and advice. This is urgent.
Best regards...
While I usually refrain from responding to "hate mail", there is one issue in the above that I will respond to. First you state: "I know that you can't guarantee exact timing with a high-level language like C". Later you say: "the vendor of the application development environment has to guarantee that the main operation of my system will not be affected by changing the tools or optimization levels, etc. Otherwise, what does it stand for???"
How do you make those two statements go together? I do understand that we have upset your fragile sensibilities by suggesting that the fault is in your code (how awful; your code is, of course, perfect), but every experienced developer has seen examples of "faulty code that happens to work", which, I am 98% sure, is the case here.
So, go ahead, sit in the corner, cry and blame the tools or pull up your pants and fix your code.
Erik
I know everything you guys have claimed. I know that you can't guarantee exact timing with a high-level language like C and that you should use hardware timers instead.
Do you think I'm not using timer interrupts within my code? You guys are talking about elementary things and you think you're saying something wise.
Do you think I'm not aware that I could offload some of the work onto another processor? There is no other processor inside the device I'm working on that I can offload to, OK???
Do you think I'm not aware that I could observe the operation using a full-blown ICE? I don't have any ICE, and I said so right at the beginning. I'm not very happy with a printf debugging strategy either, but that's all I can do because I don't have any other option.
Do you think I'm not aware that I could remove all my peripheral dependencies, run the application inside the simulator, and observe what happens? I certainly am!!! But I can't use such an option, because my processor has to work in cooperation with other processors inside the system, to which I can't move any of the load. Moreover, it has many peripherals, like a MIL-STD-1553 chip, which you can't simulate inside the Keil environment. The operation strictly depends on this peripheral, and removing such a peripheral would not tell me anything about the operation of my code on the real hardware!!!
I know as well as you guys that you can't expect the same code after changing the linker and optimization levels, etc. But the vendor of the application development environment has to guarantee that the main operation of my system will not be affected by changing the tools or optimization levels, etc. Otherwise, what does it stand for???
I was just trying to find out whether anybody had faced such a problem after enabling the code packing option provided by the advanced linker, and if so, how they solved it. That's all. But you guys seem to be having fun.
Carry on!!!
I couldn't understand why you are acting so silly.
"It's sheer folly to expect to get the same output when you switch to a completely different Linker and then change the optimisation levels, too!"
What the hell is that? I've just asked a question, but you seem to be making yourself happy by insulting people.
I've changed my linker and optimization level and everything worked fine, genius!!! The problem appears only when you activate code packing, OK???
"I would like to state that the most likely reason for what the OP is seeing is that defective code happened to work because of something happened late enough not to cause a problem."
See http://www.keil.com/forum/docs/thread8342.asp
To re-phrase: That is, code that was fundamentally flawed, and only appeared to work by pure luck - your luck ended with the change of Linker and/or optimisation level!
eg, you had not properly waited for some necessary condition but, luckily, the code happened to be so slow that this didn't matter. Now, with the shiny new improved high-efficiency super-optimising Linker, your code is faster - and so it stops "working".
It could also go the other way - it's not uncommon that optimising for code size can make it run slower (eg, by converting inline code into subroutine calls). If your code was barely quick enough before, this could make it too slow - and so, again, it would stop working.
First, I think Andy is a bit soft. "It's sheer folly to expect to get the same output when you switch to a completely different Linker and then change the optimisation levels, too!" should be "It's sheer folly to expect to get output that even vaguely resembles the original when you switch to a completely different Linker and then change the optimisation levels, too!"
Second, after reading the posts one could get the impression that the 'do not' relates to timing loops and such. I would like to state that the most likely reason for what the OP is seeing is that defective code happened to work because something happened late enough not to cause a problem.
"Andy's point is that you should never rely on the execution time of the C code itself."
Yes, that's precisely what I meant, even though it wasn't exactly what I said!
"It's unwise to expect the compiler and linker always to produce exactly the same output..."
It's sheer folly to expect to get the same output when you switch to a completely different Linker and then change the optimisation levels, too!
Never, ever do anything timing critical in 'C'!!
I'd say: For timing critical operations in C, use a timer. That's what they're for.
Andy's point is that you should never rely on the execution time of the C code itself. You can do precise timing and write your code in C, but you need a hardware timer to provide a consistent time reference.
There are many, many possible ways to generate machine code that correctly implements any given C source. It's unwise to expect the compiler and linker always to produce exactly the same output, especially when you start changing optimization options.
"Thank you very much. It seems that linker code packing doesn't work very well for modules that have a time dependency, just as you claimed."
No - you are missing the point!
The point is that high-level programming languages (eg, 'C') give you absolutely no guarantees at all about timing. Often, they don't even guarantee that things will be done in exactly the order that you might infer from the source code.
"how can I be sure that it will not be there to catch me out next time, when I make a new modification to my code?"
For Jon Ward,
You are right. I had been deactivating code optimization for only the one task that has a time dependency. But when I deactivated code optimization for the entire module that contains that task function, everything started working fine again. Thank you very much. It seems that linker code packing doesn't work very well for modules that have a time dependency, just as you claimed.
But how can I be sure that it will not be there to catch me out next time, when I make a new modification to my code?
Best regards,
"My application used to work fine but I was having problems with my available code space"
The solution is easy:
Get rid of the RTOS.
After enabling 9th optimization level and code packing, it seems that a line is not being executed in one of my tasks.
That's probably a misinterpretation. The way opt 9 and higher work means that it becomes impossible to find out whether a given line of code is executed just from stepping through the C source code in a debugger. You have to look at the assembly code.
But I would expect the code to crash entirely because of a stack overflow.
That expectation means you have some serious misconceptions about how the 8051 architecture works. The 8051 stack mechanism doesn't have a solid wall it'll run into and crash.
There is only so much code that you can cram into a 64K address space!
The higher Linker optimisations are clever, but they aren't magic - if your code is too big, then it just won't fit, I'm afraid.
I have seen the kind of symptoms you describe on a very full system - the answer was just to get rid of some excess code.
The trouble with printf debugging is that it just adds to the bloat by adding all those strings into the code space! :-(
Don't you have JTAG access for debugging?
Can't you move some stuff onto the other processors?
Is it possible that I'm facing a stack overflow problem? I couldn't find any option in the IDE to detect a stack overflow failure.
I also installed the latest release of uVision 3 (C51 ver 8.05, etc.) but unfortunately nothing has changed. What else would you suggest?
I want to give you more detailed information about my application. My target processor is a Dallas DS89C450, which has 64 KB of on-chip program memory. It doesn't support using an external stack; unfortunately, the 256-byte internal data section of the microcontroller has to be used as the stack region. I'm using the RTX51-Full V7.01 real-time executive for multitasking. My application consists of different operating system tasks devoted to specific duties. Some portion of this RAM region is also used by the real-time operating system, so I'm placing my local and global variables in an external RAM chip.
The available on-chip code memory is nearly full, so I decided to change my linker to LX51 to take advantage of its better optimization levels and its code packing option. But I started having problems with the operations that this microcontroller has to perform for the system.
The microcontroller works in a device, cooperating with two other 8051-based microcontrollers inside that device. Therefore, it doesn't seem possible to use a full-blown ICE to trace the operation of the code at run time. Instead, I'm using a "printf" debugging strategy over a serial line. After enabling the 9th optimization level and code packing, it seems that a line is not being executed in one of my tasks. This line was executed before enabling LX51 and code packing. Moreover, all other lines and functions continue executing properly. After enabling the 11th optimization level and code packing, the abnormalities get even worse. Can this phenomenon be due to a stack overflow? But I would expect the code to crash entirely because of a stack overflow. I would appreciate any help in finding the reason for what I have observed.
Thank you for your interest.
"system starts not passing some of the lines in my code which it used to pass in the past."
Sorry, but that's still far too vague.
So you're doing "printf() debugging". That may not be sufficient for this kind of problem. You need considerably finer-grained control --- possibly even a full-blown emulator. Actually, with a problem like that, I'd probably try to boil it down far enough to fit in the simulator (i.e. remove all dependence on external peripherals), and try it in there.