The features I would like in my ARM processor (part 1)

Jens Bauer
November 21, 2013
1 minute read time.

Originally, this blog post was intended to be all-in-one, but it was suggested that I split it into smaller parts.

So I'll mention the features I'd like in my ARM processor one at a time, piece by piece.

The purpose of this is to throw new ideas (good and bad) at the ARM engineers.

-Features that may make a difference, especially features which would help software and hardware developers get to new places.

Now let's start...

128-bit floating point registers.

Currently, the only processor I know of that supports 128-bit floating-point calculation is the PowerPC (which combines two 64-bit registers).

If we had 128-bit floating-point registers, we could do high-precision math very quickly.
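For reference, this kind of precision can be emulated in software today, one library call per operation. A minimal sketch of my own, assuming GCC with libquadmath's __float128 type (not something from the post), shows the extra digits you get and hints at why a native 128-bit register file would be so much faster:

```c
#include <quadmath.h>   /* GCC's software quad-precision library */
#include <stdio.h>

int main(void)
{
    /* 1/3 in double keeps about 16 significant digits; __float128 keeps
       about 34, but today every __float128 operation is a function call
       instead of a single FPU instruction. */
    double     d = 1.0 / 3.0;
    __float128 q = 1.0Q / 3.0Q;

    char buf[48];
    quadmath_snprintf(buf, sizeof buf, "%.34Qf", q);
    printf("double    : %.17f\n", d);
    printf("__float128: %s\n", buf);
    return 0;
}
```

Build with something like gcc example.c -lquadmath; the point is simply that the precision exists today, just not at hardware speed.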

Where is it needed?

  • Physics engines; vector units often support only 32-bit single-precision floating-point calculations, which is not enough for these things.
  • Real-life physics calculations and simulations (aerodynamics and the like).
  • Advanced compression engines, increasing the compression ratio and speeding up compression.
  • 3D compression of 2D movies (using a 3D computer model represented as 2D) would make movie files much smaller and perhaps quicker to decompress.
  • When not doing calculations, the FPU can be used for moving data quickly (as usual).

What would I use it for?

I'd use such a feature to do billions of planetary gravity calculations per second.

These mainly consist of multiply, add, subtract and square-root operations.

Having a high-precision vector unit would definitely give an insane performance boost here.
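To make that concrete, here is a sketch of the kind of inner loop I mean. It is only an illustration (the body_t type, the accumulate_gravity name and the use of __float128 are my own assumptions, not an existing API); a hardware 128-bit vector unit would run this pattern natively instead of through software emulation:

```c
#include <quadmath.h>

typedef struct { __float128 x, y, z, mass; } body_t;   /* illustrative type */

/* Add the gravitational acceleration that body b exerts on body a into acc[].
   The arithmetic is exactly the mix mentioned above: subtracts, multiplies,
   adds and one square root per body pair. */
void accumulate_gravity(const body_t *a, const body_t *b, __float128 acc[3])
{
    const __float128 G = 6.674e-11Q;          /* gravitational constant */
    __float128 dx = b->x - a->x;
    __float128 dy = b->y - a->y;
    __float128 dz = b->z - a->z;
    __float128 r2 = dx * dx + dy * dy + dz * dz;
    __float128 r  = sqrtq(r2);                /* the square-root step */
    __float128 s  = G * b->mass / (r2 * r);   /* G * m / r^3 */
    acc[0] += s * dx;
    acc[1] += s * dy;
    acc[2] += s * dz;
}
```

Run over every pair of bodies in an N-body simulation, this is where the billions of multiply, add, subtract and square-root operations come from, and where wide, high-precision registers would pay off.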

I know we will get there some day (after the Cortex-A57), but the sooner we get it, the sooner we get the cool end results.

Perhaps it'll be the next Cortex-A that delivers impressive performance when it comes to precision math, opening up further possibilities.

What would you use it for?

If you had a 128-bit precision floating-point unit, what would you use it for, or what kind of things do you think it could be used for?

Comment
  • daith over 11 years ago

    IBM implements 128-bit floating point in decimal as well as binary. In some ways the decimal floating point is more saleable, in that it can easily be used for financial transactions. The scaling might not be used often, but when places have high inflation you can easily break the long int limit. And you'd also really want to be able to convert to packed decimal for COBOL; how long before ARM starts running COBOL programs? ;-)

    For large calculations, 128-bit binary floating point can be very useful; it is amazing how hard it is to even get a decent algorithm that finds the roots of a quadratic accurately, with the way errors can get magnified (see the sketch after this comment). It would also make it much easier to get the last bit exact in the maths library routines operating on doubles.

    Now, if we're talking about the moon, how about some direct memory under the SIMD unit, turning it into an old-style bit-array processor like the

    ICL Distributed Array Processor - Wikipedia, the free encyclopedia

    where all 32 registers would be changed at once? That would make it a supercomputer!

    Or, for something a bit cheaper and more practical, how about support for fast programmable associative lookup, which either quickly finds a match or lets the user fill an entry if it doesn't? This might help things like hash tables, dynamic-language method lookup or JIT branches.
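On the quadratic-root point above: the cancellation is easy to reproduce in plain double precision. The sketch below is my own illustration (not part of the comment) of the naive formula next to the usual Vieta rearrangement; with 128-bit arithmetic even the naive form keeps enough digits for this example:

```c
#include <math.h>
#include <stdio.h>

/* Roots of x^2 + b*x + c = 0 with b >> c: the true roots are roughly
   -1e8 and -1e-8, but the naive formula cancels catastrophically. */
int main(void)
{
    double b = 1.0e8, c = 1.0;
    double d = sqrt(b * b - 4.0 * c);

    double naive_small  = (-b + d) / 2.0;    /* subtracts two nearly equal numbers */
    double large_root   = (-b - d) / 2.0;    /* well conditioned */
    double stable_small = c / large_root;    /* Vieta: the product of the roots is c */

    printf("naive  small root : %.17g\n", naive_small);
    printf("stable small root : %.17g\n", stable_small);
    return 0;
}
```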
