
Assembly vs C

Hi,
We have one application which was developed in c51. By applying all aptimization techniqs provided by compiler, code size is near to 20k. If we change total code to Assembly from C, how much code size can be reduced?.
Please provide me ur inputs....

Ramesh

  • I agree with earlier posters that it's worth exploring ways to write C that compiles to a smaller space. On a large scale, you're not going to beat the compiler by writing assembler by hand -- at least, not by any amount that you couldn't have reached by doing similar work on the C source itself. Good assembler can be smaller than bad C, but that's an apples-to-oranges comparison.

    I disagree; it IS possible to beat the compiler by significant amounts if you know where and how. Willy-nilly converting C to assembler will gain you very little.

    OK, let us compare good C with good assembler:
    There are areas in which you can beat the compiler by significant amounts, specifically in the area of string handling.

    Let us take an example (OK, this one is mainly about speed, but it is fresh in my mind):
    reading 1 KB from one buffer and, after processing the entries, storing them in another buffer.

    You happen to know that the move is always in groups of 32, and you have, by using assembler, located your output buffer at a memory address of the form xx00h.

    You can then do the store by what I sketch here:

    movegroup:
    ;*** set the output address in P2val and r0val before entry
           mov  r0,r0val        ; low byte of the output address
           mov  p2,P2val        ; high byte (page) of the output address
    gloop: movx a,@dptr         ; fetch the next input byte
           inc  dptr
    ;*** process byte in a
           movx @r0,a           ; store via r0/P2
           inc  r0
           djnz ....            ; loop over the group of 32
           mov  a,r0            ; did r0 wrap past 00h?
           jnz  goon
           inc  P2val           ; yes: advance to the next 256-byte page
    This is about one third of the code space and execution time of the equivalent C.
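    For comparison, here is a portable sketch of the C the compiler has to work from. All names here (move_groups, process) are illustrative, and the per-byte transform is a stand-in for the real processing step; the point is that nothing in the C tells the compiler the destination starts on an xx00h boundary or that the count is a multiple of 32, so it must emit full 16-bit address updates on every store.

    ```c
    #include <stdint.h>
    #include <stddef.h>

    /* Placeholder for the real per-byte processing step;
       here it just inverts the byte. */
    static uint8_t process(uint8_t b)
    {
        return (uint8_t)~b;
    }

    /* Generic copy-with-processing loop. The compiler cannot assume
       dst begins at a page boundary or that n is a multiple of 32,
       so it cannot split the address into a fixed page (P2) plus a
       cheap 8-bit offset (r0) the way the hand-written assembler does. */
    void move_groups(const uint8_t *src, uint8_t *dst, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = process(src[i]);
    }
    ```
    
    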

    So, what is the basis for the savings? Combining something you know (groups of 32) with something that exploits that knowledge (the buffer address is xx00h), through your familiarity with the hardware (P2 needs updating only when r0 crosses zero). By doing so, you act on information there is no means of "telling" the compiler.

    AGREED: where it is possible for the compiler to do the optimum based on things you (can) tell it, the savings are very nominal.

    Taking a program at random and "deciding" to cut its code space by 25% by switching to assembler is, in most if not all cases, a losing proposition.

    Optimize the C (not by increasing the optimizer level, but by writing good code), then find the few places where an assembler rewrite gives significant gains and do those. THEN STOP! The 80/20 rule fully applies here.

    Erik

  • I thought I'd add my 2 cents here.
    First of all, I can't see any contradiction between what Drew and Erik said. What I mean is that if you write a complex program entirely in C, you'll most likely end up with smaller code than if you write it entirely in assembly language. Of course, it is possible to make a mess in either scenario, but let's assume the programmer does a sound job.
    I think all the points raised in this discussion are good ones. To sum up, the recipe is to write good C code, keeping in mind that some C language constructs translate into tighter code than others on your particular architecture. For example, in one of my projects I was writing simple menu-handling code, and simply replacing 2D array indexing with pointer access instantly saved me more than 1 KB of code memory.
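    The 2D-array-versus-pointer point can be sketched like this (the menu table and both functions are made-up illustrations, not mike's actual code). Each `menu[row][len]` access forces the compiler to recompute `row * COLS + len`; walking a pointer does the address arithmetic once and then uses cheap increments, which on an 8051 avoids repeated 16-bit multiply/add index math.

    ```c
    #define ROWS 4
    #define COLS 16

    /* Hypothetical menu table, standing in for the menu strings. */
    static const char menu[ROWS][COLS] = { "Open", "Save", "Print", "Exit" };

    /* Indexed form: every iteration re-derives the address
       from (row, len) via a multiply-and-add. */
    int length_indexed(int row)
    {
        int len = 0;
        while (menu[row][len] != '\0')
            len++;
        return len;
    }

    /* Pointer form: compute the row address once,
       then advance with a simple increment. */
    int length_pointer(int row)
    {
        const char *p = menu[row];
        int len = 0;
        while (*p++ != '\0')
            len++;
        return len;
    }
    ```

    Both functions return the same result; the difference is only in the code the compiler must generate for each access pattern.
    
    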

    Best of luck,
    - mike