
Bitwise logical AND stores in a bit ....

Hi,

I get a curious result when compiling the following code:

typedef union  {

  unsigned char                cCtrlFullByte;

  struct {
    unsigned char  bEnable          : 1;
    unsigned char  cUnused          : 7;

  }  cCtrlStruct;
} CtrlUnion;

void main (void)  {

    unsigned char  dummy = 0x55;
    CtrlUnion  xdata    bitUnion;

    bitUnion.cCtrlStruct.bEnable = dummy & 0x40;

    return;
}


It results in :
MOV A,#0x55
ANL A,#0x00
MOV R7,A
MOV DPTR, #0x0000
MOVX A,@DPTR
ANL A,#0xFE
ORL A,R7
MOVX @DPTR, A 

I thought that the bit result of a bitwise logical AND is 1 if the result is non-zero, else 0.
It seems that I didn't understand ANSI C the same way as the Keil compiler does. Am I wrong?

Arnaud DELEULE

Parents
  • I guess, it can be understood like this:

    bit and unsigned char:1 are DIFFERENT types:

    bit can be seen similar to the bool type in C++: 0 is false, anything else is true.

    Integers of 1-bit size are not boolean. Rather, they remain integers restricted to the range {0,1}. They are "arithmetical"; bits, on the other hand, are "logical".

    A numerical value cast to bit therefore must yield 1 (i.e. true) if it would succeed as a condition in a control statement, otherwise it must yield 0 (i.e. false).

    A numerical value cast to an integer of smaller size (e.g. 1 bit), however, has the "excess" bits of its binary representation stripped, possibly leaving only the LSB. So these propositions hold:

    (bit)2 == 1;      /*logical use */
    (uchar:1)2 == 0; /* arithmetic use (this isn't C-Syntax, I know)*/
    
    This explains the difference, but to me too this is an improper mix of types and memory locations: if bit (seen as a synonym for bool) is a type, it should be allowed to reside anywhere.

    This:
    bit xdata externalBit;  /*why not?*/
    
    should be allowed.

    Norbert


Children
  • Arnaud, the compiler is generating correct code. You are converting from an 8-bit unsigned char to a 1-bit unsigned char. The rule when converting from a larger unsigned type to a smaller unsigned type is simply to discard the most significant bits. You will find that the same sort of thing happens when you convert from an unsigned int to an unsigned char – the 8 most significant bits of the unsigned int are discarded.

    Norbert is quite correct to say that bit memory and a 1-bit bit field are not the same thing. In C you have to be very careful with Booleans – only logical operators and some functions return truly Boolean results.

    However:

    bit xdata externalBit;  /*why not?*/
    
    cannot be allowed, because bit is a memory type, not a variable type. In fact, C51 does not allow variables stored in bit memory to have a type!

    So, when you compile this:
    typedef struct  {
        unsigned char  bEnable          : 1;
        unsigned char  cUnused          : 7;
    } BitFieldStruct;
    
    unsigned char   dummy = 0x55;
    BitFieldStruct  bdata bitUnion;
    
    void main (void)  {
        bitUnion.bEnable = dummy & 0x74;
        return;
    }
    
    You get this:
    0000 900000      R     MOV     DPTR,#dummy
    0003 E0                MOVX    A,@DPTR
    0004 5400              ANL     A,#00H
    0006 FF                MOV     R7,A
    0007 E500        R     MOV     A,bitUnion
    0009 54FE              ANL     A,#0FEH
    000B 4F                ORL     A,R7
    000C F500        R     MOV     bitUnion,A
    000E         ?C0085:
    000E 22                RET     
    
    So, why is A ANDed with #00H and not #74H? That is the compiler predicting that you are not interested in the 7 most significant bits: only bit 0 of the constant survives, and bit 0 of 0x74 is 0. Note the effect of a small change to your code:
    ….
    unsigned char   dummy = 0x55;
    BitFieldStruct  bdata bitUnion;
    
    void main (void)  {
        bitUnion.bEnable = dummy & 0x71;
        return;
    }
    
    Will give you this:
    0000 900000      R     MOV     DPTR,#dummy
    0003 E0                MOVX    A,@DPTR
    0004 5401              ANL     A,#01H
    0006 FF                MOV     R7,A
    0007 E500        R     MOV     A,bitUnion
    0009 54FE              ANL     A,#0FEH
    000B 4F                ORL     A,R7
    000C F500        R     MOV     bitUnion,A
    000E         ?C0085:
    000E 22                RET     
    
    That is, bitUnion.bEnable is assigned the value of the least significant bit of dummy.

    If that is not what you wanted, it is because you are not coding correctly. What you actually want is something like this:
    typedef struct  {
        boolean          bEnable          : 1;
        unsigned char  cUnused          : 7;
    } BitFieldStruct;
    
    unsigned char   dummy = 0x55;
    BitFieldStruct  bdata bitUnion;
    
    void main (void)  {
        bitUnion.bEnable = ( dummy & 0x74 ) != 0;
        return;
    }
    
    Which generates this:
    0000 900000      R     MOV     DPTR,#dummy
    0003 E0                MOVX    A,@DPTR
    0004 5474              ANL     A,#074H
    0006 6004              JZ      ?C0085
    0008 7F01              MOV     R7,#01H
    000A 8002              SJMP    ?C0086
    000C         ?C0085:
    000C 7F00              MOV     R7,#00H
    000E         ?C0086:
    000E EF                MOV     A,R7
    000F 5401              ANL     A,#01H
    0011 FF                MOV     R7,A
    0012 E500        R     MOV     A,bitUnion
    0014 54FE              ANL     A,#0FEH
    0016 4F                ORL     A,R7
    0017 F500        R     MOV     bitUnion,A
    0019         ?C0087:
    0019 22                RET     
    
    Which is the compiler's rather long-winded way of doing this (a hand-written sketch, so no addresses or opcode bytes):
    MOV     DPTR,#dummy
    MOVX    A,@DPTR
    ANL     A,#074H
    ADD     A,#0FFH       ; sets the carry if A is non-zero
    MOV     A,bitUnion
    MOV     ACC.0,C       ; copy the carry into the bit field
    MOV     bitUnion,A
    RET

  • OK, I understand the thought process,
    but I guess that very few people do 1-bit arithmetic.
    Still, that is the way Keil chose, and it is one solution.
    Not the way for me, but that's life! ;-)

    Thanks all for your help and explanations

    Arnaud