• optimize scaling that involves float division in M0

    My platform is based on the M0 processor. A simple (and probably non-optimal) piece of code I use to convert my input to a scaled output is:

    uint32_t output = (uint32_t)(12000 * ((1023/(float)input) - 1));

    Is there a way to achieve this result in a much more efficient way?
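
    One way to avoid the float division entirely is an algebraic rewrite: 12000 * ((1023/input) - 1) equals 12000 * (1023 - input) / input, and the intermediate product 12000 * 1023 = 12,276,000 fits easily in a `uint32_t`, so the whole computation can stay in integer math (the Cortex-M0 has no FPU, so float division is emulated in software). A minimal sketch, assuming `input` is a 10-bit ADC reading in the range 1..1023 and that the hypothetical `scale` function name is free to use:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Integer-only rewrite of:
     *   (uint32_t)(12000 * ((1023/(float)input) - 1))
     * Algebraically identical: 12000 * (1023 - input) / input.
     * Assumes 1 <= input <= 1023; input == 0 would divide by zero,
     * and input > 1023 would underflow the unsigned subtraction. */
    static uint32_t scale(uint32_t input)
    {
        if (input == 0)          /* guard; pick whatever policy fits */
            return 0;
        return (12000u * (1023u - input)) / input;
    }

    int main(void)
    {
        /* Integer division truncates, matching the float-then-cast
         * behavior of the original expression. */
        printf("%u\n", scale(512));
        return 0;
    }
    ```

    For midpoint rounding instead of truncation, `(12000u * (1023u - input) + input / 2) / input` is a common variant; whether that matters depends on how the scaled output is consumed.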