
typedef & optimisation

In the book "TCP/IP Lean" [1], the author states:

"I have used #define in preference to typedef because compilers use better optimisation strategies for their native data types."

Is this true of the Keil C51 compiler?

i.e., will C51 generate better-optimised code from source using

    #define U8 unsigned char

than from source using

    typedef unsigned char U8;

[1] Bentham, J.
"TCP/IP Lean"
CMP Books, 2000
ISBN 1-929629-11-7
http://www.iosoft.co.uk/tcplean.htm

  • Thanks, Jon. This is consistent with my limited knowledge of compiler design. The compiler knows how to deal with objects and expressions having particular characteristics, with size and "type" being among those characteristics. It should not make any difference whether a compiler learns about an object having one of the "native" types directly through a macro expansion or by "looking back" to resolve a typedef. Two objects having the same "root type" should be treated the same.

    Of course, I could be wrong entirely. ;-)

    --Dan Henry
