Optimize scaling that involves float division on the M0

My platform is based on the M0 processor. A simple (and probably non-optimal) piece of code I use to convert my input to a scaled output is as follows:

uint32_t output = (uint32_t)(12000 * ((1023/(float)input) - 1));

Is there a way to achieve this result more efficiently, since the M0 doesn't have an FPU? For example, is the following optimization valid (given I'm okay with a slight loss of precision)?

uint32_t tmp = (1023 * 1000/input - 1000);
uint32_t output = (uint32_t)(12 * tmp);
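
Alternatively, as a rough sketch (assuming input is nonzero and no larger than 1023, so the subtraction can't wrap around), the constants could be folded together so that only a single integer division is needed:

/* 12000 * (1023/input - 1) == 12000*1023/input - 12000 = 12276000/input - 12000,
 * evaluated entirely in 32-bit integer arithmetic (result is truncated, not rounded) */
uint32_t output = 12276000u / input - 12000u;

That keeps everything in uint32_t with one division and no extra multiply, at the cost of the same truncation error as the two-step version above.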
