We are using Keil C-51 V7.06. There is a structure defined:
typedef struct {
    unsigned char address_bus_h;
    unsigned char address_bus_l;
    unsigned char data_bus;
} instbus_raw_t;
Copying it member by member:

instbus_raw.address_bus_h = instbus_raw_local.address_bus_h;
instbus_raw.address_bus_l = instbus_raw_local.address_bus_l;
instbus_raw.data_bus      = instbus_raw_local.data_bus;
Copying it with memcpy:

memcpy(&instbus_raw, &instbus_raw_local, sizeof(instbus_raw_t));
And copying it with a hand-written routine:

#include <stddef.h>   /* for size_t */

void *my_memcpy(void *s1, const void *s2, size_t n)
{
    char *su1 = (char *)s1;
    const char *su2 = (const char *)s2;

    /* plain byte-by-byte copy */
    for (; 0 < n; --n)
        *su1++ = *su2++;
    return (s1);
}

my_memcpy(&instbus_raw, &instbus_raw_local, sizeof(instbus_raw_t));
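For completeness: assuming instbus_raw and instbus_raw_local are ordinary variables of the instbus_raw_t type shown above (the surrounding declarations here are only a sketch), ANSI C also allows the whole structure to be copied with a single assignment:

typedef struct {
    unsigned char address_bus_h;
    unsigned char address_bus_l;
    unsigned char data_bus;
} instbus_raw_t;

static instbus_raw_t instbus_raw, instbus_raw_local;

void copy_instbus(void)
{
    /* whole-structure assignment: the compiler generates the member copies itself */
    instbus_raw = instbus_raw_local;
}

What the compiler turns that assignment into (three byte moves, a copy loop, or a library call) is exactly the kind of code generation being discussed here.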
Is it that Keil's compilers have not caught up with the latest and greatest in compiler technology? Can you give me an example (manufacturer and version) of a compiler that is the latest and greatest in technology? That way, I can let you know if we've caught up with them. Jon
Yes, I agree, statements like these have to be supported by facts. I use the C166 compiler, and there are not too many compilers for that architecture; from what I have heard, Keil's C166 is the best available. But what I meant was that, time and again, when I look at the code generated by C166, I can't help noticing obvious optimizations that the compiler did not perform. Let's look at a real-world example:
#include <intrins.h>

long l[2];

long read_long_atomically(int i)
{
    long tmp;
    long *ptr;

    ptr = &l[i];
    _atomic_(0);
    tmp = *ptr;
    _endatomic_();
    return tmp;
}

Compiler listing:

    MOV    R5,R8
    SHL    R5,#02H
    MOV    R4,#l
    ADD    R4,R5
    ATOMIC #02H
    MOV    R6,[R4]
    MOV    R7,[R4+#02H]
    MOV    R4,R6
    MOV    R5,R7
    RET
By hand, the same function can be written as:

    SHL    R8,#2
    ADD    R8,#l
    ATOMIC #02H
    MOV    R4,[R8]
    MOV    R5,[R8+#2]
    RET
In this case, I agree with you. However, most functions are not quite that trivial. It's easy to create the perfect optimizing compiler if you guarantee that all the functions it compiles are small and not too complex. The problem arises when you have functions that are insanely complex. Then, the compiler still must do a good job. As it is, small functions like the one you demonstrate would be the ones that I would first write in C (to get working) and later go back and write in assembly (if needed). Jon
"It's easy to create the perfect optimizing compiler if you guarantee that all the functions it compiles are small and not too complex." I'm sure a lot of users would appreciate a compiler command-line option called "perform near-perfect optimization on simple functions". If it's easy, why not do that? I seem to remember that the OpenWatcom compiler even allows the user to specify the amount of virtual memory to use in optimization. Basically, the amount of available memory pretty much determines how good a job the compiler does at optimizing complex functions. Ah, well... - mike