
error messages versus opt level

I have noticed that whether or not I get error messages sometimes depends on the optimization level. For example, I had a statement that tried to store a value into code space, and at opt level 0 I got an error saying that it couldn't convert the lvalue. I understood this. However, at opt level 8 I got no error, and in fact the 'optimizer' simply left out the offending line. Why should it do this? If it is an error, the compiler should report it and not just throw the code away.

  • If a line of code does nothing, any optimizing compiler will remove it. I leave optimization at the highest level at all times.
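    For example (a generic sketch, not Keil-specific, with made-up names): the assignment to tmp below has no observable effect, so an optimizing compiler is entitled to drop it.

    /* 'tmp' is written but never read, so the assignment is a dead store.
       At higher optimization levels a compiler will typically remove it,
       precisely because the line "does nothing". */
    int square(int x)
    {
        int tmp;

        tmp = x;        /* dead store: a candidate for elimination */
        return x * x;
    }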

  • If a line of code does nothing then certainly it would be optimized out. However, my big concern is that it was supposed to do something but there was an error in the line. I feel that the compiler should have reported the error and not simply removed the line without saying anything. I could have spent a looooong time debugging, wondering why that particular function didn't work properly.

  • "If a line of code does nothing then certainly it would be optimized out. However, my big concern is that it was supposed to do something..."

    The compilation process is not a simple once-through operation, taking source in and spitting object out.
    The compiler goes through a number of phases; presumably the phase at which the optimiser decided to delete your line comes before the stage that would have detected the error?!

    e.g., i = f(x) could be perfectly correct syntactically; the fact that i happens to be in CODE space, and is therefore unwritable, might not be detectable until quite late in the code-generation process.
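    To make that concrete in plain C (using const here as a stand-in for the C51 'code' qualifier, since both make the target non-writable):

    /* The assignment parses cleanly; the "unmodifiable lvalue" complaint
       can only be raised once the compiler has worked out what 'i'
       actually is and where it lives. */
    const int i = 0;

    int f(int x)
    {
        return x + 1;
    }

    void demo(int x)
    {
        i = f(x);   /* syntactically fine, semantically an unmodifiable lvalue */
    }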

  • The statement in question that started me down this path is:
    *((Byte *)line.pointer)=active_thread;
    Hopefully, I didn't type in any new syntax errors.
    Anyway, the pointer element is declared as
    void code *pointer;
    so, yes, it tries to store the value of active_thread into a code location. However, my question still stands: why should the compiler simply throw it out without flagging me? Even if the compilation process is involved, errors should be detected and reported. Seems like a bug in the compiler to me.

  • Your syntax is fine, so the compiler's syntax checking must give no errors.

    The error lies only in the fact that you have tried to write to a non-writable memory area; therefore, if the optimiser has removed that attempted write, there is nothing wrong with your code!
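    Purely to illustrate the point (a hypothetical sketch built from the declarations quoted above, not actual C51 output): if the optimiser discards the marked line, for whatever reason, before the lvalue check runs, what is left behind is perfectly legal, so there is nothing left to report.

    typedef unsigned char Byte;     /* assumed width of 'Byte' */

    struct {
        void code *pointer;         /* pointer into (read-only) code space */
    } line;

    Byte active_thread;

    void update(void)               /* hypothetical function name */
    {
        *((Byte *)line.pointer) = active_thread;   /* the offending store */
        /* If the optimiser throws the line above away before the
           unmodifiable-lvalue check is made, the function that remains
           is empty and entirely valid, so no diagnostic appears. */
    }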

  • We tried to duplicate your problem, but no matter which optimization level or compiler version we use, we get the correct error message. Your case must be different from ours. How can we duplicate it?

    Our test file is as follows:

    stmt level    source
    
       1          struct  {
       2            unsigned char      v;
       3            unsigned char code *ptr;
       4          } s;
       5          
       6          void main (unsigned char t)  {
       7   1        *((unsigned char * )s.ptr) = t;
       8   1      }
    *** ERROR C183 IN LINE 7 OF Y.C: unmodifiable lvalue
    

  • Well, I am puzzled. I tried your simple program and got the error at both opt level 0 and opt level 8. I am running v2.20a, by the way. So I tried making it a little bit more like our real program. This is what I did:
    #define BYTE int

    struct {
        unsigned char v;
        void code *ptr;
    } s;

    void chooseOne(BYTE a)
    {
        int i;

        switch (a)
        {
        case 1:
            i = 1;
            break;
        case 2:
            i = 2;
            break;
        case 3:
            *((BYTE *)s.ptr) = a;
            break;
        case 4:
            i = 4;
            break;
        default:
            break;
        }
    }

    void main (void)
    {
        int a = 3;

        chooseOne(a);
    }
    I set up the target options the same as in our real program. I got the same error at both opt levels. I don't know why I don't get the error at opt level 8 with our actual code. I have asked my colleague who originally wrote it to look into it. At least he was able to reproduce my results on his computer, so it is not just my machine.

  • #define BYTE int

    Just a note: the above is quite disgusting and misleading. This is exactly what typedefs are for, and ints on many platforms are more than a BYTE (PIC excepted). What is wrong with
    typedef int TwoBytes;
    If you need to see whether the typedef is "defined", you can still simply define something like:

    typedef int TwoBytes;
    #define TWO_BYTE_TYPE
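    And if a single byte really is what was meant, a typedef can say so directly (assuming unsigned char is the intended width):

    /* keeps the name honest: BYTE really is one byte */
    typedef unsigned char BYTE;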
    I know I should just keep my mouth shut; I'm sorry for butting in, but this misuse of a #define made me spill my beer - another crime.

    - Mark

  • Sorry, but we need an example that allows us to duplicate your initial problem. If you have a more complex example, you may want to send this to:

    support.intl@keil.com