
Mali Offline Compiler - arithmetic cycles vs texture cycles

As written in the documentation and explained in a few other sources, when you work with the Mali Offline Compiler you should focus first on the stage with the highest cycle count in the report (i.e. arithmetic, load/store, or texture).

One thing I've noticed is that in pretty much any shader the texture unit is never the bottleneck.
Example:

Hardware: Mali-T720 r1p1
Architecture: Midgard
Driver: r23p0-00rel0
Shader type: OpenGL ES Fragment

Main shader
===========

Work registers: 4
Uniform registers: 1
Stack spilling: false

                                A      LS       T    Bound
Total instruction cycles:   22.00    1.00    5.00        A
Shortest path cycles:       10.75    1.00    5.00        A
Longest path cycles:        10.75    1.00    5.00        A

A = Arithmetic, LS = Load/Store, T = Texture
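
(For reference, a report like the one above comes from an invocation roughly like the following; the shader file name is made up and the flag spelling is from memory, so check malioc --help for your version.)

    malioc -c Mali-T720 my_shader.frag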


Every texture instruction usually takes one cycle (on Midgard at least). 

So if you add another texture, you need to do something with its result - at minimum blend it with the rest of the computation - which means the arithmetic cycles go up as well. As a result, the texture column is essentially never higher than the other columns.
When I optimize shaders, my current intuition is still to be quite aggressive and reduce texture fetches as much as possible. I also usually don't trade arithmetic for texture fetches, i.e. I don't move a computation from arithmetic into a baked texture unless the computation is very expensive.
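
To make that concrete, here is a minimal throwaway fragment shader (made-up file and uniform names) for the "add another texture" case; feeding something like this to malioc should add roughly one cycle to the T column for the second fetch, while the blend itself lands in the A column:

    // two_taps.frag - hypothetical sketch for malioc experiments
    precision mediump float;

    uniform sampler2D u_base;
    uniform sampler2D u_detail;
    uniform float u_blend;

    varying vec2 v_uv;

    void main()
    {
        vec4 base   = texture2D(u_base, v_uv);         // first fetch
        vec4 detail = texture2D(u_detail, v_uv * 4.0); // the "extra" fetch
        // the blend runs on the arithmetic unit, so A grows alongside T
        gl_FragColor = mix(base, detail, u_blend);
    }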

Another thing: the Mali Offline Compiler assumes that every texture fetch is bilinear and that the texture has mipmaps.
We currently mostly use bilinear filtering without mipmaps on mobile.
Rationale: once you use mipmaps you also need trilinear filtering, otherwise the transitions between mipmap levels become visible.
Trilinear filtering means double the cycles and more memory throughput (8 texels fetched instead of 4 for bilinear).
On the other hand, not using mipmaps means poor cache utilization, which also costs memory throughput. I have no idea which is better in practice; I guess it depends on the project/hardware. Or is there a universal answer?
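
For reference, the two setups I am comparing are just sampler state. A minimal GLES2-style sketch of both (hypothetical helper names, level 0 assumed already uploaded):

    #include <GLES2/gl2.h>

    /* Option A: mipmapped texture with trilinear filtering. */
    static void setup_trilinear(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }

    /* Option B: plain bilinear, no mipmaps (what we currently ship). */
    static void setup_bilinear_no_mips(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }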

Also, fetching a texture means latency; this latency is hidden to some degree when the shader doesn't use too many fetches, but I assume it's still there.

Once I switch to another project in the company, I'll have time to do extensive tests related to the cost of textures and hopefully build some intuition.
As I am impatient and curious, I do hope other more experienced devs will share their intuition here.

So my questions:
1. Is it a good strategy to aggressively optimize out texture fetches and treat them as very expensive (even when they are not the bottleneck according to the Mali Offline Compiler)? Should I adjust the compiler's texture score, e.g. multiply it by 2 to account for trilinear filtering, or should I use a GPU profiler and look at metrics such as memory throughput to make the final decision? How do you do it in practice?

2. Bilinear without mipmaps vs trilinear with mipmaps - which do you think is better in practice? How do you choose? Does it depend on the hardware? We do need to support Midgard devices (we're a mobile development company and support very old devices).
 
3. If you can share any links/books/resources that explain any of the above, please do. I have already read the official Mali documentation and optimization guides.

Reply
  • Once a warp is blocked, the hardware swaps it for another warp (and saves its registers into register storage, which can become full, so then the hardware has to wait)

    Each shader core has capacity and register storage for hundreds of concurrent threads, so if one thread blocks the hardware can just select another one to run. No save/restore needed - it's an instant zero-cost switch.

    So by saying "texturing will run in parallel to the arithmetic" you mean that the arithmetic unit executes one warp while the texturing unit executes a different warp and the load/store unit a third one, and so on - so the different units are more or less always busy executing different warps, and that is how the latency is hidden.

    Yes, that's the general idea.

    2. Can you disclose the cache line size and the number of cycles a texture fetch takes on a cache miss?

    For line size, assuming 64 bytes is a good starting point for planning purposes (for both CPU and GPU).

    The latency of a cache miss is tens of cycles if you hit in L2, and hundreds of cycles if you end up in DRAM. However, note that GPUs can hide most cache miss latency - we can just pick another thread to run that has its data available.
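
    (Back-of-the-envelope, to put the 64-byte figure in context: assuming uncompressed RGBA8 texels at 4 bytes each, one cache line holds 16 texels, so a bilinear 2x2 footprint usually touches only one or two lines when neighbouring fragments sample nearby texels; without mipmaps under heavy minification the four texels can be far apart in memory, so a single fetch may touch up to four separate lines.)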
