
Arm GCC lambda optimization

Hello,

I am working on an IoT project, mixing C and C++, and I am having stack issues with lambdas.

The following code was compiled with gcc-arm-none-eabi-8-2018-q4-major-win32 at -Os and runs on a NUCLEO-L476RG. I monitored stack usage with Ozone.

#include <stdint.h>

typedef struct structTest
{
    uint32_t var1;
    uint32_t var2;
} structTest;

void dostuff(structTest s); // defined elsewhere; exact signature assumed

// Test 1
int main()
{
    dostuff( [&]() -> structTest{ structTest $; $.var1 = 0; $.var2 = 0; $.var2 = 24; $.var1 = 48; return $; }() );
}

// Test 2
int main()
{
    dostuff( [&]() -> structTest{ structTest $; $.var1 = 0; $.var2 = 0; $.var1 = 48; return $; }() );

    dostuff( [&]() -> structTest{ structTest $; $.var1 = 0; $.var2 = 0; $.var2 = 13; $.var1 = 42; return $; }() );
}

We have some complex macros that enable us to make sure structures are initialized before use, and those macros generate code similar to the code above. "structTest $; $.var1 = 0; $.var2 = 0;" is always generated, and the macros then assign the user's values to the corresponding fields.
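
To illustrate, here is a much-simplified sketch of the pattern (not the real macro, just the shape of the code it generates; INIT_STRUCT is a placeholder name):

#define INIT_STRUCT(assignments) \
    [&]() -> structTest \
    { \
        structTest $;            /* '$' in identifiers is a GNU extension */ \
        $.var1 = 0; $.var2 = 0;  /* zero-init prologue, always generated */ \
        assignments;             /* user-supplied field assignments */ \
        return $; \
    }()

// Test 1 then reads: dostuff( INIT_STRUCT( $.var2 = 24; $.var1 = 48 ) );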

The expected behavior in both Test 1 and Test 2 was that only 8 bytes of stack (sizeof(structTest)) would be used for data. This is the case in Test 1, but Test 2 uses 16 bytes.

Is there any way to keep this kind of structure but force the compiler to reuse the stack? -fconserve-stack and -fstack-reuse=all both had no effect.
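
For example, is rewriting Test 2 along these lines the only option? Each temporary gets its own block scope so the lifetimes are visibly disjoint, giving -fstack-reuse something clear to work with (sketch only, not verified on this toolchain):

// Test 2 reworked with explicit block scopes
int main()
{
    {
        structTest s = { 0, 0 }; // zero-init prologue
        s.var1 = 48;
        dostuff(s);
    }
    {
        structTest s = { 0, 0 };
        s.var2 = 13;
        s.var1 = 42;
        dostuff(s);
    }
}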

I also can't find documentation on the expected optimization behavior for lambda functions; if anyone has a link, I'll be grateful.
