I have configured the ADC and am using a 2 V supply as the analog input, but the LCD shows nothing. The board I'm using is a C8051F206, whose integrated ADC is 12-bit. I'm using port P1.7 as the analog input.
MOV ADC0CF, #000H    ; 1 system clock and 1 gain
MOV ADC0CN, #0C1H
MOV ADC0L, #000H     ; ADC Data Word Register
MOV ADC0H, #000H     ; ADC Data Word Register
MOV ADC0LTH, #000H   ; ADC Less-Than High Byte Register
MOV ADC0LTL, #000H   ; ADC Less-Than Low Byte Register
MOV ADC0GTH, #0FFH   ; ADC Greater-Than High Byte Reg
MOV ADC0GTL, #0FFH   ; ADC Greater-Than Low Byte Reg
CONVERT:
    SETB ADBUSY          ; start conversion
    LCALL DELAY
POLL:
    JB   ADCINT, PRINT   ; poll to see whether conversion is done
    SJMP POLL
PRINT:
    CLR  ADCINT
    CLR  RS
    MOV  DAT, #0FH       ; on the LCD
    SETB EN
    LCALL DELAY
    CLR  EN
    MOV  A, ADC0H
    LCALL WRITE_TEXT
    RET
Erm, sorry, please ignore what I wrote above; that's not what I wanted to ask. I have read the datasheet, but I do not understand most of the ADC section. Can someone please explain it to me?
What I wanted to ask is: what does "SAR clock" mean, and must I satisfy the timing for the external trigger source?
"ADC0CF": this register has two fields (clock period bits and internal amplifier gain). What do these two mean? Does the internal amplifier gain mean that whatever analog input I use, it will amplify the signal and output it? What if I use a voltage source as the analog input?
About ADLJST, what does it mean by "data in the ADC0H:ADC0L registers are right/left justified"?
Also, about ADWINT, does it just enable the comparison function, or does it have other functions?
This is the link for the datasheet:
www.silabs.com/.../C8051F2xx.pdf
"what does SAR clock means"
Successive Approximation Register clock.
"must i satisfy the timing for the external trigger source?"
OF COURSE.
"The Internal Amplifier Gain means that whatever analog input i use, it will amplify the signal and output it?"
NO. What it does is internally divide the reference voltage (which has the same effect on the output as amplification), so for twice the 'gain' you get twice the noise.
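A rough way to see that effect in numbers, sketched in C (the function name and the 12-bit / 3 V figures are mine, chosen to match the part discussed later in this thread): dividing the reference by the gain shrinks the full-scale range, which shrinks the size of one ADC step.

```c
#include <assert.h>

/* Sketch, assuming a 12-bit ADC and a 3 V (3,000,000 uV) reference.
   A gain of G effectively divides the reference by G, so the
   full-scale range shrinks and one LSB gets smaller - the signal
   itself is never amplified on the way out. */
unsigned long lsb_microvolts(unsigned long vref_uv, unsigned int gain)
{
    return (vref_uv / gain) / 4096UL;   /* size of one step, truncated */
}
```

With a gain of 2 each step is half the size, so the same input voltage produces twice the raw count - which looks like amplification, and doubles the noise the same way.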
"What if i use a voltage source as an analog input?"
????
Erik
I mean, if I use 2 V as the analog input and want to print out the value in binary on the LCD, what should I do?
Multiply/divide/adjust what comes out of the ADC to make it a voltage value, then convert it to ASCII and display it.
I see. So I should configure the SAR clock to 16 system clocks and the gain to 1?
Also, what does left/right justified mean and do?
ADWINT: ADC Window Compare Interrupt Flag
0: ADC window comparison data match has not occurred
1: ADC window comparison data match has occurred
What does the above mean? Does it compare the input with the reference?
It is said that the output is in ADC0H, so do I have to configure this register at the start of the program?
If I'm using a 25 MHz processor, do I have to configure the SAR clock to 16 system clocks, so that I get less than 2 MHz as noted in the datasheet?
I have set ADWINT to 0.
Do you have any websites where I can find out how to convert the data to ASCII in assembly language? I have Googled and Yahooed for it, but the results are mostly in C.
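The standard technique is repeated division by 10: each remainder is one decimal digit, and adding '0' (30H) turns it into ASCII. A sketch in C (function name is mine); on the 8051 the same loop is usually written with DIV AB:

```c
#include <assert.h>
#include <string.h>

/* Convert an unsigned 16-bit value to a decimal ASCII string.
   Digits come out lowest-first, so collect them in a small
   buffer and reverse them into the output. */
void u16_to_decimal_ascii(unsigned int value, char *out)
{
    char tmp[5];                              /* 65535 has 5 digits */
    int n = 0;
    do {
        tmp[n++] = (char)('0' + value % 10);  /* lowest digit -> ASCII */
        value /= 10;
    } while (value != 0);
    while (n > 0)
        *out++ = tmp[--n];                    /* reverse into place */
    *out = '\0';
}
```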
The LCD printed "h" as 1.32V.
How do i make it such that when i input a "2.5V", the LCD will also display "2.5V"?
How do i display floating point in assembly language?
You don't do floating point. You use fixed point integers.
Take your integer value from the ADC. Multiply with a constant (beware of numeric overflow) and divide by another constant.
If suitable constants are used, the integer will represent volt*10 or volt*100 or volt*1000.
Then emit volt/1000 as the integer part and volt&1000 as the millivolt part.
Oops. Should be volt%1000 for the millivolt part.
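That split can be sketched like this (a hypothetical helper in C, using % for the remainder as corrected above, with sprintf doing the zero-padding):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a millivolt count such as 2269 as the string "2.269":
   the integer part is mv / 1000 and the millivolt part mv % 1000,
   zero-padded to three digits. */
void format_millivolts(unsigned int mv, char *out)
{
    sprintf(out, "%u.%03u", mv / 1000u, mv % 1000u);
}
```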
"Take your integer value from the ADC. Multiply with a constant (beware of numeric overflow) and divide by another constant."
The input I'm using is a voltage input. What do you mean by "integer value" and "beware of numeric overflow"?
"Then emit volt/1000 as the integer part and volt&1000 as the millivolt part."
Does "emit" mean display it on the LCD?
If the ADC is 10 bit wide (0..1023), max range is 2.5V then each step on the ADC will correspond to about 0.00244V. Note that some ADC will let 1023 represent 2.5V, and some ADC will let 1024 represent 2.5V - i.e. some ADC will only be able to measure one resolution step lower than the reference voltage.
If you apply 2.27V on the input, the ADC will report this as an integer value of about 930, since the value is about 930/1024 of full range.
To convert the integer you get from the ADC to V or mV, you must multiply the value with a scale factor.
Note that if you multiply 930 (the measured value) with 0.00244 you get about 2.27 - the expected voltage.
However, you do not want to multiply with a floating-point value. Instead, you want to use fractional integers. If you move the decimal point of 0.00244 five steps right and multiply 930 with 244 instead, you get the result 226920 - this represents the voltage in units of 0.01 mV. However, that is too large for a 16-bit number.
If you can't immediately multiply and get a usable result, you have to try a multiply followed by a division, or find a more optimal multiplication factor.
Since 0.00244 * 100000 = 244, and 244 is an even number, you can divide by two and get 122. Also even, so you can divide by two and get 61.
1024*61 will fit in a 16-bit integer.
The measured 930 * 61 is 56730. That is a value that is 4 times too small, but we don't have room to multiply yet. Throw away the last digit, with optional rounding.
Then you have 5673. Multiply by 4 to get 22692. Since you started by moving the decimal point right 5 steps, and later threw away one digit, you now have a value that is 10000 times too large. Throw away one more digit (with optional rounding) to get 2269. Now the decimal point has been moved 3 steps, so you have your value in millivolts. Insert a decimal point before the last three digits - that is your voltage.
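The sequence above can be sketched in C (the function name is mine; the same steps map directly onto 8051 MUL and DIV instructions):

```c
#include <assert.h>

/* 10-bit reading, 2.5 V full scale: one step is about 0.00244 V,
   and 0.00244 * 1e5 = 244 = 61 * 4.  Every intermediate value
   stays below 65536, so this also works where int is 16 bits. */
unsigned int adc_to_millivolts(unsigned int counts)   /* counts <= 1023 */
{
    unsigned int v = counts * 61u;   /* at most 1023 * 61 = 62403 */
    v = (v + 5u) / 10u;              /* drop one digit, with rounding */
    v = v * 4u;                      /* restore the factor taken out of 244 */
    v = (v + 5u) / 10u;              /* drop one more digit -> millivolts */
    return v;
}
```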
It is also possible to use successive approximation when converting from the ADC value to a voltage. Each bit in the ADC reading has a value twice as large as the previous bit.
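One way to sketch that idea, again assuming the 10-bit / 2.5 V numbers from above (the helper name and the microvolt units are mine): precompute each bit's weight and sum the weights of the set bits.

```c
#include <assert.h>
#include <stdint.h>

/* Each bit of a 10-bit reading is worth twice the bit below it.
   With a 2.5 V (2,500,000 uV) reference, bit i is worth
   2,500,000 * 2^i / 1024 microvolts; summing the weights of the
   set bits reconstructs the input voltage. */
uint32_t counts_to_uv(uint16_t counts)
{
    uint32_t uv = 0;
    int i;
    for (i = 0; i < 10; i++)
        if (counts & (1u << i))
            uv += (uint32_t)((2500000ull << i) / 1024u);
    return uv;
}
```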
There are a lot of other methods too. Often, associativity is used to evaluate the integer and fractional parts separately, to keep down the number of bits needed in the evaluation. This avoids problems with numeric overflow of 16-bit numbers.
"If the ADC is 10 bit wide (0..1023), max range is 2.5V "
I know my ADC is 12-bit, but how do you know whether it's 0-1023 or 0-1024?
The datasheet states that the voltage conversion range is 0 to Vref, where Vref is Vdd = 3V.
I know how you get 1024, but for a 12-bit ADC, what's the range?
Also, from the datasheet, does that mean my max input voltage is 3V?
Is the range for a 12-bit ADC 4096?
The max range is 3V. Does each step correspond to 0.000732 V?
I moved the decimal point 6 steps right and got 732, and since it is an even number, I divided it by 2 twice and got 183. If I divide by 2 again, it will give me a decimal value. But 4095 * 183 will not fit into a 16-bit integer.
What should i do?
Can I just use the converted data from the ADC, convert it to ASCII, and print it on the LCD?
I have tried it, but there's a difference in the values, e.g. a 1.79V input displays 1.42V on the LCD, and a 1.00V input displays 0.79V.
The larger the input, the greater the difference.
For a 12-bit ADC, you don't have to care whether the ADC defines the reference voltage to be at 2^12 or 2^12-1. The error is way less than the actual precision; you don't have a voltage reference with an error in the neighbourhood of Vref/4096. It mostly matters when using an 8-bit ADC, since one step represents 0.4% there.
If you can afford the use of the long data type, then you don't have to worry about overflow in the multiplication.
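With a 32-bit intermediate, the whole conversion collapses to one multiply and one divide. A sketch assuming the 12-bit / 3 V numbers from this thread (the function name is mine):

```c
#include <assert.h>

/* Widen to unsigned long before multiplying: 4095 * 3000 is about
   12.3 million, which overflows a 16-bit int but fits easily in
   the 32-bit long that 8051 C compilers provide. */
unsigned int adc_to_mv(unsigned int raw)
{
    return (unsigned int)(((unsigned long)raw * 3000UL) / 4096UL);
}
```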