My platform is based on the M0 processor. A simple (probably non-optimal) piece of code I use to convert my input to a scaled output is as follows:
uint32_t output = (uint32_t)(12000 * ((1023/(float)input) - 1));
Is there a way to achieve this result more efficiently, since the M0 doesn't have an FPU? For example, is the following optimization valid (given that I'm okay with a slight loss of precision)?
uint32_t tmp = (1023 * 1000 / input - 1000);
uint32_t output = (uint32_t)(12 * tmp);
output = (12000u*1023u)/input - 12000u;
It's as simple as that. uint32_t is easily big enough to hold 12000 * 1023 (12,276,000), so the whole calculation stays in 32-bit integer arithmetic.
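As a minimal sketch of how this might be wrapped up, assuming the input is a 10-bit reading in the range 1..1023 (the function name and the zero-input guard are illustrative assumptions, not part of the original answer):

#include <stdint.h>

/* Integer-only scaling for a core without an FPU.
 * Assumes 1 <= input <= 1023; for input > 1023 the subtraction
 * would wrap around, just as the float version would go negative. */
uint32_t scale_output(uint32_t input)
{
    if (input == 0) {
        return 0; /* assumption: pick a safe result instead of dividing by zero */
    }
    /* 12000u * 1023u = 12,276,000, well within uint32_t range. */
    return (12000u * 1023u) / input - 12000u;
}

The only precision lost relative to the float version is the truncation of the final division, which is at most one count of the output.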