Hi, We have one application which was developed in C51. By applying all the optimization techniques provided by the compiler, the code size is close to 20K. If we change all the code from C to assembly, how much can the code size be reduced? Please provide your inputs.... Ramesh
memory=cheap man hours=expensive

Only when your volume is low. Our company sells product by the hundreds of thousands or millions. It would pay to have three or four engineers devoted full time for a year just to shave a dollar off the RAM cost. (Not that it would take anywhere near that much effort.)

I agree with earlier posters that it's worth exploring ways to write C that compiles to a smaller space. On a large scale, you're not going to beat the compiler by writing assembler by hand -- at least, not by any amount that you couldn't have reached by doing similar work on the C source itself. Good assembler can be smaller than bad C, but that's an apples-to-oranges comparison.

It helps to study the generated code in the .lst files, see what's taking up space, and then work on those parts. Some more specific suggestions:

- Avoid xdata access. It's very expensive to put variables in xdata; it takes a lot of instructions to load up the DPTR and move variables in and out. If you use the large memory model, beware of temporaries and long parameter lists that spill out into xdata.

- Use data memory for the "stack", keeping your locals there.

- Write small routines. Linker optimizations will eliminate a lot of common code, but you can go ahead and make them named subroutines. Sometimes even a single line of code is worth making into a routine. An 8051 takes more instructions per line of C than you might be used to with modern processors, so smaller bits of code are good targets for reuse by making them functions.

- Experiment with caching intermediate results. Sometimes the compiler is good about this, sometimes not, and you can save space by using some intermediate variables.

- Experiment with caching dereferences (see the C sketch below). Again, sometimes the compiler is good about optimizing several references to the same structure, in source like

      myPtr->field1 = xxx;
      myPtr->field2 = yyy;
      SomeFunc (myPtr->field3);

  or the same with myStruct.myField.

Just by changing the phrasing of the C code, we can often squeeze 10-20% on top of level 9/11 optimization out of even fairly tightly written code; more for naively written modules.
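To illustrate that last point, here is a minimal C sketch of caching a dereference in a local. The structure, field names and SomeFunc are invented for illustration; it assumes myPtr is a global pointer into xdata, so each uncached access costs a DPTR setup:

    typedef struct {
        unsigned char field1;
        unsigned char field2;
        unsigned char field3;
    } MyStruct;

    extern MyStruct xdata *myPtr;          /* pointer to a structure in xdata */
    extern void SomeFunc (unsigned char v);

    void update (unsigned char xxx, unsigned char yyy)
    {
        /* Cache the pointer in a local: it is read once, and the
           compiler can keep it in registers/data memory for all
           three accesses instead of reloading it per statement.   */
        MyStruct xdata *p = myPtr;

        p->field1 = xxx;
        p->field2 = yyy;
        SomeFunc (p->field3);
    }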
"I agree with earlier posters that it's worth exploring ways to write C that compiles to a smaller space. On a large scale, you're not going to beat the compiler by writing assembler by hand -- at least, not by any amount that you couldn't have reached by doing similar work on the C source itself. Good assembler can be smaller than bad C, but that's an apples-to-oranges comparison."

I disagree, it IS possible to beat the compiler by significant amounts if you know where and how. Willy-nilly changing C to assembler will gain you very little.

OK, let us compare good C with good assembler: there are areas in which you can beat the compiler by significant amounts, specifically in the area of handling strings.

Let us take an example (OK, this one is mainly for speed, but it is fresh in my mind): reading 1K from one buffer and, after processing the entries, storing them in another buffer. You happen to know that the move is always in groups of 32, and you have, by using assembler, located your output buffer at a memory location whose start address is xx00h. You can then do the store with what I sketch here:
move_group:                  ;*** output address already set up in P2val and r0val
        mov   r0,r0val       ; low byte of output address
        mov   p2,P2val       ; page (high byte) of output address onto P2
gloop:
        movx  a,@dptr        ; fetch byte from input buffer
        inc   dptr
        ;*** process byte in a
        movx  @r0,a          ; store via P2:r0 - DPTR stays on the input buffer
        inc   r0
        djnz  groupcnt,gloop ; groupcnt = entries in this group (32)
        mov   a,r0           ; did r0 wrap back to 00h?
        jnz   goon
        inc   P2val          ; yes: step output to the next 256-byte page
goon:
        ...                  ; continue with the next group
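For comparison, a rough C equivalent of the same inner move (hypothetical names; process() is assumed) has to drive both buffers through xdata pointers, so the compiler ends up reloading DPTR for every source read and destination write - exactly the overhead the hand-placed P2:r0 trick avoids:

    extern unsigned char process (unsigned char b);   /* assumed processing step */

    void move_group_c (unsigned char xdata *src,
                       unsigned char xdata *dst,
                       unsigned char count)            /* e.g. 32 */
    {
        while (count--)
            *dst++ = process (*src++);                 /* two xdata accesses per byte */
    }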
ADDENDUM: I missed the last step in the post above. Optimize the C (not by increasing the optimizer level, but by writing good code), then find the few places where an assembler rewrite gives significant gains and do that. THEN STOP! The 80/20 rule applies here fully. Then apply the compiler/linker optimizations.
I thought I'd add my 2 cents here. First of all, I can't see any contradiction between what Drew and Erik said. What I mean is that if you write a complex program entirely in C, you'll most likely end up with smaller code than if you write it in assembly language. Of course, it is possible to make a mess in both of those scenarios, but let's assume that the programmer does a sound job.

I think all the points raised in the discussion are good ones. To sum up, the recipe is to write good C code, keeping in mind that some C language constructs translate into tighter code than others for your particular architecture. For example, in one of my projects I was writing simple menu-handling code; simply replacing 2D array indices with pointers instantly saved me more than 1K of code memory (see the sketch below).

Best of luck,
- mike
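A minimal sketch of the kind of change mike describes - the array, its sizes and the output routine putchar_lcd are invented for illustration. Indexing a 2D array costs a multiply plus address arithmetic on every access; walking a pointer computes the row address once:

    #define ROWS  8
    #define COLS 21

    extern char code menuText[ROWS][COLS];   /* menu strings in code memory */
    extern void putchar_lcd (char c);        /* hypothetical output routine */

    /* Indexed version: menuText[row][col] is re-evaluated each pass. */
    void show_row_indexed (unsigned char row)
    {
        unsigned char col;
        for (col = 0; col < COLS; col++)
            putchar_lcd (menuText[row][col]);
    }

    /* Pointer version: the row address is computed once, then the
       pointer is simply incremented.                               */
    void show_row_ptr (unsigned char row)
    {
        char code *p = menuText[row];
        unsigned char col;
        for (col = 0; col < COLS; col++)
            putchar_lcd (*p++);
    }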
Drew Davis said, "avoid xdata access. It's very expensive to put variables in xdata; it takes a lot of instructions to load up the DPTR and move variables in and out."

Very true. See http://www.8052.com/forum/read.phtml?id=104610

However, note once again that I say, "you may be able to significantly reduce your code size..." and, "The benefits of all this do, of course, depend upon the nature of the existing code - YMMV"

Assembler is just another language - it is not a magic bullet! The mere fact that you write in Assembler rather than 'C' will not of itself make your code any smaller - it all depends on how skilled you are in using the particular language. As has already been mentioned, it is quite possible that an unskilled assembler programmer's code will be bigger and/or slower than a skilled 'C' programmer's code!
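To make the quoted point concrete, a small sketch (variable names invented; the generated code shown in the comments is only indicative): the same increment is a single instruction for a data variable but a whole DPTR/MOVX sequence for an xdata variable:

    unsigned char xdata slowCount;   /* external RAM */
    unsigned char data  fastCount;   /* internal RAM */

    void tick (void)
    {
        slowCount++;   /* roughly: MOV DPTR,#slowCount / MOVX A,@DPTR /
                          INC A / MOVX @DPTR,A                          */
        fastCount++;   /* roughly: INC fastCount (single direct INC)    */
    }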