If Optimize 8 (Reuse Common Entry Code) detects a call to void MyFunc() as the last statement within void OuterFunc() (for example), it produces an LJMP to MyFunc() rather than an LCALL (see the examples below). Nice optimization, but MyFunc() is part of a library written in assembly and by design requires a call rather than a jump. I tried surrounding its prototype with #pragma OT (7), but that doesn't work. Surrounding OuterFunc() with the OT (7) pragma works, but that's a bit cumbersome and runs the risk of someone forgetting to do it.
void OuterFunc( void )
{
    MyFunc();       // Compiler generates LJMP. Not good :o(
}

#pragma OT (7)
void OuterFunc( void )
{
    MyFunc();       // Compiler now generates the necessary
                    // LCALL, but programmers WILL forget to use
                    // this construct and will blow the whistle
                    // when their software bombs.
}
#pragma OT (8)
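One way to keep the pragma out of every caller might be to confine the OT (7) region to a single wrapper module and have all C code call the wrapper instead of MyFunc() directly (a rough sketch; MyFunc_Call and the file name are just made-up names). A tail call into the wrapper can still become an LJMP, but that should be harmless, since it is the LCALL inside the wrapper that pushes the return address MyFunc() expects. Worth verifying in the generated listing.

// mycall.c -- hypothetical wrapper module; only this file has to remember the pragma
extern void MyFunc( void );     // assembly routine that must be LCALLed

#pragma OT (7)                  // suppress the LCALL+RET -> LJMP substitution here
void MyFunc_Call( void )
{
    MyFunc();                   // always emitted as an LCALL in this module
}
#pragma OT (8)                  // restore full optimization for the rest of the file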
"That's patently untrue, and I think Robert did a pretty good job of showing why. The "function" he calls doesn't return... ever... period, so he is absolutely correct in saying so."

If you could be bothered to actually read what I wrote before firing off replies from the hip, you might have noticed that the function he calls (i.e. "IndirecJmp") is not the one I was talking about in the snippet you replied to:

"For the stack contents to matter at all, the function you ljmp @a+dptr to has to return."

When a CALLed subroutine LJMPs to some other one, and that one returns, that's effectively the same thing as the original called function returning. This idea can be applied recursively as many times as you like. Indeed, that's exactly the idea the compiler's CALL+RET --> LJMP optimization itself is exploiting.

And, at the risk of sounding repetitive: this is not what is causing the actual problem.
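To illustrate (with made-up function names), here is why the substitution is normally harmless:

void Leaf( void )
{
    // does its work and ends in a plain RET
}

void Middle( void )
{
    Leaf();     // last statement: under OT (8) the LCALL Leaf / RET
}               // pair may collapse into a single LJMP Leaf

void Top( void )
{
    Middle();   // LCALL Middle pushes a return address pointing back
}               // into Top; if Middle merely LJMPs to Leaf, Leaf's RET
                // pops that same address, so control still comes back
                // to Top, and the chain can be any number of LJMPs deep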
"When a CALLed subroutine LJMPs to some other one, and that one returns, that's effectively the same thing as the original called function returning. This idea can be applied recursively as many times as you like. Indeed, that's exactly the idea the compiler's CALL+RET --> LJMP optimization itself is exploiting."

I'm not sure where all this confusion is coming from, but your statement is exactly what Robert is trying to overcome. The library function he's talking about is designed to assume that it has been CALLed, because it modifies the stack. So when a situation arises where the compiler decides to make this optimization, the library function destroys the stack. What Robert wanted to do (and I think he succeeded about 10 messages ago) is force the compiler not to perform this optimization in the case of this library routine.

How we got off on some ridiculous tangent like he feared, I'm not certain, but I am quite sure that a programmer as competent as you understands what he's trying to do, so I don't know where the whole debate is coming from.

I understand your main point: if you write the library function not to care whether it's called or ljmp'd to in the first place, then this problem is avoided. But I can think of plenty of reasons why he might not, at this point in time, be able to modify the assembly routine. I'm sure that's the case, and so debates over whether the original author of the assembly routine did a horrible job become moot.
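For contrast, a sketch of the failure mode (StackLib and OtherWork are hypothetical stand-ins; the assumed property of the real assembly routine is that it consumes its caller's return address from the stack, so it only behaves if an LCALL actually pushed one):

extern void StackLib( void );   // hypothetical: pops / reuses the return
                                // address its caller pushed, so it must be
                                // reached by LCALL, never by LJMP
extern void OtherWork( void );  // hypothetical follow-up routine

void SafeUse( void )
{
    StackLib();                 // not the last statement, so the compiler
    OtherWork();                // emits LCALL and the stack holds exactly
}                               // what StackLib expects

void BrokenUse( void )
{
    StackLib();                 // last statement: OT (8) may turn this into
}                               // LJMP, nothing gets pushed, and StackLib
                                // then consumes whatever happens to be on
                                // the stack, typically the return address
                                // of BrokenUse's own caller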
Jay, I think you understand exactly what I'm trying to accomplish and, as you stated, I found the solution quite some time ago. The technique is identical to the old _chain( old_vector ) that was available in DOS programming, and it is necessary when you have more than one interrupt handler servicing the same interrupt line (EX0, for example) and the ISRs have to be chained in order to route service to the appropriate handler.

I won't even dare discuss my interrupt scheme. Can you imagine the tangent that would invoke???

"How we got off on some ridiculous tangent like he feared"

Hmmm... does this mean I'm Edgar Cayce???

Anyway, I thank everyone who actually contributed ideas to my question.