Large memcpy

We are using Keil C-51 V7.06.

There is a structure defined:

typedef struct
{
  unsigned char address_bus_h;
  unsigned char address_bus_l;
  unsigned char data_bus;
} instbus_raw_t;

Copying the structure to another one with three simple assignments enlarges the code by only 9 bytes, but this is not an elegant solution:

instbus_raw.address_bus_h = instbus_raw_local.address_bus_h;
instbus_raw.address_bus_l = instbus_raw_local.address_bus_l;
instbus_raw.data_bus = instbus_raw_local.data_bus;
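
For comparison, C also permits copying the whole structure in a single assignment, leaving the copy strategy to the compiler. A minimal sketch (whether this produces inline moves or a call to a library helper depends on the compiler version):

instbus_raw = instbus_raw_local;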

Using the normal library function memcpy blows up the code by 300 bytes!

memcpy(&instbus_raw,&instbus_raw_local,sizeof(instbus_raw_t));

Using a custom function my_memcpy, the code increases by 167 bytes:

#include <stddef.h>  /* for size_t */

/* byte-wise copy through generic pointers, like the library memcpy */
void *my_memcpy(void *s1, const void *s2, size_t n)
{
  char *su1 = (char *)s1;
  const char *su2 = (const char *)s2;

  for (; 0 < n; --n)
    *su1++ = *su2++;
  return s1;
}

my_memcpy(&instbus_raw, &instbus_raw_local, sizeof(instbus_raw_t));
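
One likely reason for the overhead is that generic pointers in C51 are 3 bytes wide and are dereferenced through library helper routines, because the target memory space has to be resolved at run time. Below is a hedged sketch of a copy routine using memory-specific pointers, assuming both structures live in the internal data space (my_memcpy_data is a made-up name, not a library function):

void my_memcpy_data(unsigned char data *dst,
                    const unsigned char data *src,
                    unsigned char n)
{
  /* data-space pointers fit in a single byte on the 8051, so this
     loop compiles to plain moves instead of helper-routine calls */
  while (n--)
    *dst++ = *src++;
}

my_memcpy_data((unsigned char data *)&instbus_raw,
               (const unsigned char data *)&instbus_raw_local,
               sizeof(instbus_raw_t));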

In a project on a small chip with only 2 KB of flash, 300 bytes just to copy a few bytes is considerable!

Has anyone else noticed the same effect of library functions wasting resources?

Regards Peter

Parents
  • In this case, I agree with you. However, most functions are not quite that trivial.

    It's easy to create the perfect optimizing compiler if you guarantee that all functions it compiles are small and not too complex.

    The problem arises when you have functions that are insanely complex. Then the compiler must still do a good job.

    As it is, small functions like the one you demonstrate are the ones I would first write in C (to get them working) and later go back and rewrite in assembly (if needed).

    Jon


Children
  • It's easy to create the perfect optimizing compiler if you guarantee that all functions it compiles are small and not too complex.

    I'm sure a lot of users would appreciate a compiler command-line option called "perform near-perfect optimization on simple functions". If it's easy, why not do that?
    I seem to remember that the OpenWatcom compiler even allows the user to specify the amount of virtual memory to use during optimization. Basically, the amount of available memory pretty much determines how good a job the compiler does at optimizing complex functions.
    Ah, well...

    - mike