This is not a Keil question; it is a general query about ADC calibration.
1. I have to measure a voltage on an 8/16-bit MCU and also calibrate the measurement. I have selected two-point calibration because it corrects both gain and offset error.
2. Voltage to measure = 0-5 V. Let the measured values at two points be Vm1 and Vm2, and let the actual voltages at those points be Vo1 and Vo2. Then:
gain_error = (Vm2 - Vm1) / (Vo2 - Vo1)
offset_error = Vm1 - (gain_error * Vo1)
reading = (Vm - offset_error) / gain_error;
3. The problem is that this involves complex math and generally ends up in floating point, and float operations on an 8/16-bit MCU are very expensive.
Queries: 1. Is there a better method that avoids such float calculations? If yes, any example code?
2. If no to 1, can the above method be made less computationally intensive, e.g. with fixed-point math? If yes, any example code?
You can use 32-bit integers and do fixed-point arithmetic.
Scale the ADC values by 1000 and you suddenly have 3 extra decimals to play with in the computation.
You compute A * x + B and then remove the extra decimals you created.
More info on that here
en.wikipedia.org/.../Fixed-point_arithmetic
my.st.com/.../Flat.aspx