My platform is based on the M0 processor. The simple (probably non-optimal) code I use to convert my input to a scaled output is as follows:
uint32_t output = (uint32_t)(12000 * ((1023/(float)input) - 1));
Is there a way to achieve this result more efficiently, since the M0 doesn't have an FPU? For example, is the following optimization valid (given that I'm okay with a slight loss of precision):
uint32_t tmp = (1023 * 1000/input - 1000);
uint32_t output = (uint32_t)(12 * tmp);
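Alternatively, I was wondering whether folding all the constants into a single integer division would work, since 12000 * 1023 = 12276000 fits easily in a uint32_t. A sketch of what I mean, assuming input always stays in the range 1..1023 so the subtraction can't wrap:

#include <stdint.h>

uint32_t scale(uint32_t input)
{
    /* 12000 * (1023/input - 1) == 12276000/input - 12000.
     * A single truncating integer division replaces the float math;
     * the result may differ slightly from the float version because
     * the quotient is truncated rather than rounded. */
    return 12276000u / input - 12000u;
}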