
bit setting, how to

Hello!
I'm looking for an explanation of the following code in Keil's CAN.h
(01=set; 10=reset)

<-------------code------------->
...
#define NEWDAT_ 0x0200u
#define CPUUPD_ 0x0800u
#define TXRQ_ 0x2000u
...
#define NEWDAT_CLR (~NEWDAT_)
#define CPUUPD_CLR (~CPUUPD_)
#define TXRQ_CLR (~TXRQ_)
...
#define NEWDAT_SET (~(NEWDAT_ >> 1))
#define CPUUPD_SET (~(CPUUPD_ >> 1))
#define TXRQ_SET (~(TXRQ_ >> 1))
...

For example, this statement clears (resets) the CPUUPD flag and sets the NEWDAT and TXRQ flags, while leaving all the other flags unchanged:

CAN_MSGOBJ[can_object_number].msg_ctl
= CPUUPD_CLR & NEWDAT_SET & TXRQ_SET;
...
<-------------/code------------->
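
To see what those macros actually expand to, I copied the defines into a small test program on the PC and printed the masks grouped into bit pairs. This is just my own sketch, not from CAN.h, and I'm assuming the msg_ctl register is 16 bits wide, so only the low 16 bits are shown:

<-------------code------------->
#include <stdio.h>

/* Defines copied from the CAN.h excerpt above */
#define NEWDAT_     0x0200u
#define CPUUPD_     0x0800u
#define TXRQ_       0x2000u

#define NEWDAT_CLR  (~NEWDAT_)
#define CPUUPD_CLR  (~CPUUPD_)
#define TXRQ_CLR    (~TXRQ_)

#define NEWDAT_SET  (~(NEWDAT_ >> 1))
#define CPUUPD_SET  (~(CPUUPD_ >> 1))
#define TXRQ_SET    (~(TXRQ_ >> 1))

/* Print the low 16 bits of a mask, grouped into bit pairs, bit 15 first. */
static void print_pairs(const char *name, unsigned int mask)
{
    int bit;
    printf("%-12s", name);
    for (bit = 15; bit >= 0; bit--) {
        putchar(((mask >> bit) & 1u) ? '1' : '0');
        if (bit != 0 && (bit % 2) == 0)
            putchar(' ');
    }
    putchar('\n');
}

int main(void)
{
    print_pairs("CPUUPD_CLR", CPUUPD_CLR);
    print_pairs("NEWDAT_SET", NEWDAT_SET);
    print_pairs("TXRQ_SET",   TXRQ_SET);
    print_pairs("combined",   CPUUPD_CLR & NEWDAT_SET & TXRQ_SET);
    return 0;
}
<-------------/code------------->

On the PC this prints 11 for every bit pair that is not explicitly targeted; only the pairs at bits 13..12 (TXRQ), 11..10 (CPUUPD) and 9..8 (NEWDAT) come out as 10 or 01.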

How does it work, i.e. how does that single assignment leave all the other flags unchanged? Does it have something to do with the u, and what does the u stand for?

I know bit manipulation/masking done like this:
bitfield = bitfield | (0x2);   // set bit 1
bitfield = bitfield & ~(0x2);  // clear bit 1
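
For comparison, here is a complete (made-up) example of that conventional read-modify-write style, using an ordinary variable:

<-------------code------------->
#include <stdio.h>

int main(void)
{
    unsigned int bitfield = 0x00F0u;  /* arbitrary starting value */

    bitfield = bitfield | 0x2u;       /* set bit 1: read, OR, write back */
    printf("after set:   0x%04X\n", bitfield);

    bitfield = bitfield & ~0x2u;      /* clear bit 1: read, AND, write back */
    printf("after clear: 0x%04X\n", bitfield);

    return 0;
}
<-------------/code------------->

That style always needs the current contents of the register, whereas the CAN.h statement above is a single write.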

Thanks, Sven.
