Hello,
1. What is this different style of value assignment? What is it used for?
unsigned char const event_strings[11] = { ~0x3f, ~0x06, ~0x5b, ~0x4f, ~0x66, ~0x6d, ~0x7d, ~0x07, ~0x7f, ~0x6f, ~0x00 };
2. What are the drawbacks/overheads on the controller if we use the volatile qualifier? Can we apply volatile to structure members and bdata variables as shown below?
Eg 1:
typedef struct dtmf_scan_block {
    unsigned char state;
    unsigned char user;
    unsigned char s_r;
    unsigned char r_id;
} DTMF_SCAN_BLOCK;

DTMF_SCAN_BLOCK volatile xdata dtmf[NO_DTMF];

Eg 2:
unsigned char volatile bdata dtvar;
sbit dt0 = dtvar^0;
sbit dt1 = dtvar^1;
sbit dt2 = dtvar^2;
sbit dt3 = dtvar^3;

Eg 3:
typedef struct extention_data {
    unsigned char volatile dgt_buf[40];
    unsigned char volatile how[16];
    unsigned char call_privacy[3];
    unsigned char id;
} EXTN_DATA;
3. I am comparing bit-type variables with unsigned char variables in many places in my code. Should I use explicit casts?
4. Is it necessary to turn off compiler optimizations (Level 9) for hardware driver initialization code in my project? I am initializing drivers for the MT8816 switch array, GM16C550 UART, DS1380 RTC, MT8888 DTMF IC, etc.
Please advise.
1. Presumably someone thought it would be more readable to write the expressions as "negation of X" rather than flip the bits mentally. Apparently a comment would have been even more readable.
3. The compiler should promote bits automatically for the comparison. But, test this code thoroughly. I can remember a couple of recent bugs in the 7.x - 8.x versions of the compiler when it came to comparing bits to chars.
Usually you test bit variables directly. Rather than writing "if (mybit == TRUE)", which requires the compiler to promote the bit to a byte (at least) and do a full compare, you could just write "if (mybit)". Since a single bit can only be 1 or 0, there aren't any other byte values with which to compare.
4. Why do you think you need to turn off compiler optimization? Proper use of the volatile keyword should take care of the problem. Hardware registers should in general be declared volatile, as they can change on their own without the compiler's knowledge.
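A minimal sketch of the idea, written as portable C so it can be exercised off-target. The LSR/THR names and offsets follow the 16C550 register map, but the function itself is hypothetical, not your driver:

```c
#include <stdint.h>

/* 16C550-style UART register offsets (subset) */
enum { THR = 0, LSR = 5 };
#define LSR_THRE 0x20  /* transmitter holding register empty */

/* Busy-wait until the transmitter is ready, then write one byte.
   'base' points at the memory-mapped register block.  Because it is
   volatile-qualified, the compiler must re-read LSR on every loop
   iteration; without volatile, an optimizer may hoist the read and
   spin forever on a stale value. */
static void uart_putc(volatile uint8_t *base, uint8_t c)
{
    while ((base[LSR] & LSR_THRE) == 0)
        ;  /* hardware sets THRE on its own, behind the compiler's back */
    base[THR] = c;
}
```

With the registers declared volatile like this, full optimization can stay on; turning optimization off merely hides the missing qualifier.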
Rather than writing "if (mybit == TRUE)", which requires the compiler to promote the bit to a byte (at least)
I suspect that as 'bit' is a non-standard type the compiler is allowed to roll its own behaviour.
I found the following interesting for a variety of reasons (8.02, optimisation level 0):
----- FUNCTION main (BEGIN) -----
                   FILE: 'bar.c'
   10: void main(void)
   11: {
   12:   bit a;
   13:
   14:
   15:   if(a)
00000F 300002        JNB   a,?C0001?BAR
   16:   {
   17:     Foo();
000012 1126          ACALL Foo
   18:   }
000014               ?C0001?BAR:
   19:
   20:   if(a==1)
000014 300002        JNB   a,?C0002?BAR
   21:   {
   22:     Foo();
000017 1126          ACALL Foo
   23:   }
000019               ?C0002?BAR:
   24:
   25:   if(a==2)
000019 300002        JNB   a,?C0003?BAR
   26:   {
   27:     Foo();
00001C 1126          ACALL Foo
   28:   }
00001E               ?C0003?BAR:
   29:
   30:   if(a==93845938)
00001E 300002        JNB   a,?C0005?BAR
   31:   {
   32:     Foo();
000021 1126          ACALL Foo
   33:   }
000023               ?C0004?BAR:
000023               ?C0005?BAR:
   34:
   35:   while(1);
000023 80FE          SJMP  ?C0005?BAR
   36: }
000025 22            RET
----- FUNCTION main (END) -------
True enough, though a sense of consistency might compel the compiler authors to treat a bit like other integers, or perhaps a bitfield.
But considering the instruction set of the 8051, I'm not sure how you'd compare a bit to a byte without either promoting the bit, or testing the bit and then the byte. Either sequence will take several instructions (unless there's a neat trick I'm overlooking). if (bit) compiles directly to JB/JNB.