We are using Keil C-51 V7.06. There is a structure defined:
typedef struct {
    unsigned char address_bus_h;
    unsigned char address_bus_l;
    unsigned char data_bus;
} instbus_raw_t;
We need to copy one instance of this structure to another and are comparing three approaches. Member-by-member assignment:

instbus_raw.address_bus_h = instbus_raw_local.address_bus_h;
instbus_raw.address_bus_l = instbus_raw_local.address_bus_l;
instbus_raw.data_bus = instbus_raw_local.data_bus;
a call to the library memcpy:

memcpy(&instbus_raw, &instbus_raw_local, sizeof(instbus_raw_t));
or a hand-written copy routine (note that instbus_raw_t is a typedef name, not a struct tag, so the sizeof operand must be instbus_raw_t):

/* byte-wise copy, equivalent in effect to the library memcpy */
void *my_memcpy(void *s1, const void *s2, size_t n)
{
    char *su1 = (char *)s1;
    const char *su2 = (const char *)s2;
    for (; 0 < n; --n)
        *su1++ = *su2++;
    return (s1);
}

my_memcpy(&instbus_raw, &instbus_raw_local, sizeof(instbus_raw_t));
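For what it's worth, here is a minimal, self-contained sketch (standard C, reusing the names from the post above; the function names copy_by_member and copy_by_memcpy are just placeholders) that puts the member-wise copy and the memcpy call in separate functions, so the code the compiler generates for each approach can be compared side by side in the listing output:

#include <string.h>

typedef struct {
    unsigned char address_bus_h;
    unsigned char address_bus_l;
    unsigned char data_bus;
} instbus_raw_t;

static instbus_raw_t instbus_raw, instbus_raw_local;

/* approach 1: member-by-member assignment */
void copy_by_member(void)
{
    instbus_raw.address_bus_h = instbus_raw_local.address_bus_h;
    instbus_raw.address_bus_l = instbus_raw_local.address_bus_l;
    instbus_raw.data_bus      = instbus_raw_local.data_bus;
}

/* approach 2: library memcpy of the whole structure */
void copy_by_memcpy(void)
{
    memcpy(&instbus_raw, &instbus_raw_local, sizeof(instbus_raw_t));
}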
In this case, I agree with you. However, most functions are not quite that trivial. It's easy to create the perfect optimizing compiler if you guarantee that all the functions it compiles are small and not too complex. The problem arises when you have functions that are insanely complex; the compiler still has to do a good job on those. As it is, small functions like the one you show are the ones I would first write in C (to get working) and later go back and rewrite in assembly (if needed). Jon
"It's easy to create the perfect optimizing compiler if you guarantee that all the functions it compiles are small and not too complex." I'm sure a lot of users would appreciate a compiler command-line option called "perform near-perfect optimization on simple functions". If it's easy, why not do that? I seem to remember that the OpenWatcom compiler even lets the user specify the amount of virtual memory to use during optimization; the amount of available memory pretty much determines how good a job the compiler does at optimizing complex functions. Ah, well... - mike