mcpu settings to cover most popular systems for a binary distribution of Julia


We're trying to figure out a minimally small yet performant set of gcc builds for shipping ARM binaries of Julia.

Previously we've built two binaries with:



which has provided good broad support, but we wondered whether performance could be improved, given the speedups we see when building natively with `mcpu=native`.

The systems we're particularly interested in supporting are:

- Raspberry Pi 4 (Cortex-A72)

- Nvidia Jetson TX1 and Nano (Cortex-A57 / Denver 2, ARMv8)

- Nvidia Xavier NX (Nvidia Carmel, ARMv8.2)
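For reference, one way to tell which of these systems a binary is running on is the "CPU part" field in `/proc/cpuinfo`, which reports the part ID from the MIDR register. The sketch below maps the part IDs of the CPUs listed above to plausible `-mcpu` names; the ID values and CPU names are my assumptions (note that `carmel` is an LLVM target CPU, and gcc may not recognize it):

```shell
#!/bin/sh
# Hypothetical sketch: map an ARM "CPU part" ID (as reported in
# /proc/cpuinfo) to a candidate -mcpu value for the systems above.
# Part IDs are assumed from the MIDR register values; verify locally.
mcpu_for_part() {
  case "$1" in
    0xd08) echo "cortex-a72" ;;  # Raspberry Pi 4
    0xd07) echo "cortex-a57" ;;  # Jetson TX1 / Nano
    0x004) echo "carmel" ;;      # Xavier NX (Nvidia implementer 0x4e)
    *)     echo "generic" ;;     # fall back to a generic baseline
  esac
}

# Example: look up the Raspberry Pi 4's part ID
mcpu_for_part 0xd08
```

On a live system one could feed it the real value, e.g. `mcpu_for_part "$(awk '/CPU part/ {print $4; exit}' /proc/cpuinfo)"`.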

We generally advise building locally with `mcpu=native`, but can improvements be made for our binaries?



P.S. Not sure if this is the right forum for this question.
