How does the 32-bit integer multiply routine implement the multiplication? If you perform a 32-bit by 32-bit integer multiply and store the result in a 32-bit integer, does the routine keep an internal 64-bit representation of the product and then present only the lower 32 bits at the end? The same question applies to a 16 x 16 multiply where the product is stored in a 16-bit number.

Regards,
Bruno De Paoli
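(A note on the arithmetic the question turns on, independent of how any particular routine is implemented internally: the low 32 bits of the full 64-bit product are identical to the result of a multiply that simply wraps modulo 2^32, so whether or not the routine keeps a wide intermediate, the truncated answer comes out the same. A minimal C sketch, with illustrative function names, demonstrating the equivalence:)

```c
#include <stdint.h>

/* Compute the full 64-bit product, then keep only the low 32 bits. */
uint32_t low32_via_64(uint32_t a, uint32_t b) {
    return (uint32_t)((uint64_t)a * (uint64_t)b);
}

/* Multiply directly in 32 bits; unsigned arithmetic wraps mod 2^32. */
uint32_t low32_direct(uint32_t a, uint32_t b) {
    return a * b;
}
```

(For any operand pair, `low32_via_64` and `low32_direct` agree, because truncation to 32 bits is reduction modulo 2^32 and that reduction commutes with multiplication. The same modular argument applies to the 16 x 16 case: the low 16 bits of the 32-bit product equal the 16-bit wrapped result.)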